Sample records for numerical integration accuracy

  1. A spectral boundary integral equation method for the 2-D Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.

  2. A novel approach to evaluation of pest insect abundance in the presence of noise.

    PubMed

    Embleton, Nina; Petrovskaya, Natalia

    2014-03-01

    Evaluation of pest abundance is an important task of integrated pest management. It has recently been shown that evaluation of pest population size from discrete sampling data can be done by using the ideas of numerical integration. Numerical integration of the pest population density function is a computational technique that readily gives us an estimate of the pest population size, where the accuracy of the estimate depends on the number of traps installed in the agricultural field to collect the data. However, in a standard mathematical problem of numerical integration, it is assumed that the data are precise, so that the random error is zero when the data are collected. This assumption does not hold in ecological applications. An inherent random error is often present in field measurements, and therefore it may strongly affect the accuracy of evaluation. In our paper, we offer a novel approach to evaluate the pest insect population size under the assumption that the data about the pest population include a random error. The evaluation is not based on statistical methods but is done using a spatially discrete method of numerical integration in which the data obtained by trapping, as in pest insect monitoring, are converted to values of the population density. We discuss how the accuracy of this evaluation differs from the case where the same evaluation method is employed to handle precise data. We also consider how the accuracy of the pest insect abundance evaluation can be affected by noise when the data available from trapping are sparse. In particular, we show that, contrary to intuitive expectations, noise does not have any considerable impact on the accuracy of evaluation when the number of traps is small, as is conventional in ecological applications.
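
    A minimal sketch of the underlying idea only (not the authors' algorithm): the total population size is the integral of the density over the field, estimated here by the composite trapezoidal rule from a handful of noisy point samples. The density function, field length, and noise level are made-up illustrative values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D pest density (insects per metre) along a 100 m transect.
    density = lambda x: 40.0 * np.exp(-((x - 60.0) / 20.0) ** 2)

    L = 100.0
    for n_traps in (5, 9, 17, 33):
        x = np.linspace(0.0, L, n_traps)                 # trap locations
        exact_counts = density(x)                        # precise data
        noisy_counts = exact_counts * (1.0 + 0.2 * rng.standard_normal(n_traps))

        est_exact = np.trapz(exact_counts, x)            # estimate from precise data
        est_noisy = np.trapz(noisy_counts, x)            # estimate from noisy data
        print(f"{n_traps:3d} traps: precise-data estimate {est_exact:8.1f}, "
              f"noisy-data estimate {est_noisy:8.1f}")

    # Fine-grid reference value of the integral, for comparison.
    xf = np.linspace(0.0, L, 20001)
    print("reference population size:", np.trapz(density(xf), xf))
    ```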

  3. DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries

    NASA Technical Reports Server (NTRS)

    Newhall, X. X.; Standish, E. M.; Williams, J. G.

    1983-01-01

    It is pointed out that the 1960's were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire known span of historical astronomical observations of usable accuracy. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.

  4. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order ordinary differential equations. In comparison with existing integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  5. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low order time integrators while accurate solutions require high order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the selected integration method operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
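
    A heavily simplified sketch of the h-adaptivity idea described above, not the authors' BDF1-5 Navier-Stokes implementation: backward Euler (BDF1) steps on a made-up stiff scalar ODE, with a local error estimate from step doubling and an elementary controller that grows or shrinks the step to meet a user tolerance.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    # Stiff scalar test problem y' = -50*(y - cos(t)), y(0) = 0 (illustrative).
    f = lambda t, y: -50.0 * (y - np.cos(t))

    def be_step(t, y, h):
        """One backward Euler (BDF1) step: solve y1 = y + h*f(t+h, y1)."""
        return fsolve(lambda y1: y1 - y - h * f(t + h, y1), y)[0]

    def integrate(t0, y0, t_end, tol=1e-5):
        t, y, h = t0, y0, 1e-3
        while t < t_end - 1e-12:
            h = min(h, t_end - t)
            coarse = be_step(t, y, h)                  # one step of size h
            mid = be_step(t, y, 0.5 * h)               # two steps of size h/2
            fine = be_step(t + 0.5 * h, mid, 0.5 * h)
            err = abs(fine - coarse)                   # local error estimate
            if err <= tol:                             # accept the step
                t, y = t + h, fine
            # elementary controller for a first-order method
            h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
        return t, y

    print(integrate(0.0, 0.0, 2.0))
    ```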

  6. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  7. On the numeric integration of dynamic attitude equations

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Yan, Y.; Grossman, Robert

    1992-01-01

    We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
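
    A minimal sketch of the general idea (iterates that stay on the rotation group), not the authors' specific algorithms: the attitude matrix is updated by the matrix exponential of a skew-symmetric increment, so every iterate remains an orthogonal rotation matrix to machine precision regardless of step size. The body angular velocity used here is an arbitrary illustrative function.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def hat(w):
        """Map a 3-vector to its skew-symmetric (cross-product) matrix."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def omega(t):
        """Illustrative body angular velocity [rad/s]."""
        return np.array([0.3, 0.2 * np.sin(t), 0.1 * np.cos(2 * t)])

    R = np.eye(3)                  # initial attitude
    h, T = 0.05, 20.0
    for k in range(int(T / h)):
        t = k * h
        # Lie-Euler step: R_{k+1} = R_k expm(hat(h * omega(t_k)))
        R = R @ expm(hat(h * omega(t)))

    # Orthogonality is preserved independently of the step size.
    print("||R^T R - I|| =", np.linalg.norm(R.T @ R - np.eye(3)))
    ```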

  8. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    The finite element (FE) method is a powerful tool that has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can be increased. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even for large time steps and large time delays.
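
    A minimal sketch of the central difference method mentioned above, applied to a hypothetical 2-DOF system rather than the paper's FE substructure: with lumped (diagonal) mass and diagonal damping the left-hand-side matrix is diagonal and the update is fully explicit, which is why CDM is attractive for real-time use.

    ```python
    import numpy as np

    # Hypothetical 2-DOF system: M x'' + C x' + K x = f(t)
    M = np.diag([2.0, 1.0])                       # lumped (diagonal) mass
    C = np.diag([0.4, 0.2])                       # diagonal damping
    K = np.array([[400.0, -200.0],
                  [-200.0, 200.0]])
    f = lambda t: np.array([0.0, 10.0 * np.sin(5.0 * t)])

    h, n_steps = 1e-3, 5000                       # step well below the CDM stability limit
    x0, v0 = np.zeros(2), np.zeros(2)
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ x0)
    x_prev = x0 - h * v0 + 0.5 * h**2 * a0        # fictitious step x_{-1}
    x = x0.copy()

    lhs = M / h**2 + C / (2 * h)                  # diagonal, so "inversion" is trivial
    for n in range(n_steps):
        t = n * h
        rhs = f(t) - (K - 2 * M / h**2) @ x - (M / h**2 - C / (2 * h)) @ x_prev
        x_next = rhs / np.diag(lhs)               # explicit update (diagonal solve)
        x_prev, x = x, x_next

    print("displacement at t = 5 s:", x)
    ```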

  9. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.

  10. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-07

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.

  11. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  12. Application of Numerical Integration and Data Fusion in Unit Vector Method

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a family of orbit determination methods designed at the Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can therefore play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been carried out on the UVM. Firstly, improvements in observing methods and techniques have substantially improved the types and precision of the observational data, which in turn demands higher precision in orbit determination. Analytical perturbation theory can no longer meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model is thereby matched to the accuracy of the real data, the condition equations of the UVM are modified accordingly, and the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the shortcomings of the weighting strategy of the original UVM have been clarified; the new method resolves these problems, simplifies the calculation of the approximate state transition matrix, and improves the weighting strategy for data of different dimension and different precision. Orbit determination results for both simulated and real data show that the work of this thesis is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination is improved markedly and matches the high-accuracy data from the available observing apparatus; compared with the classical differential improvement with numerical integration, the calculation speed is also improved markedly; (2) after the data fusion method is introduced into the UVM, the weights are distributed rationally according to the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability and a rational weight distribution.

  13. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
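
    A minimal illustration of the fact these corrected rules build on (not the authors' Euler-Maclaurin-based quadratures for singular integrands): for a smooth periodic integrand, the plain trapezoidal rule on equispaced nodes already converges exponentially fast. The integrand below is an arbitrary smooth periodic example with a known integral.

    ```python
    import numpy as np
    from scipy.special import i0

    # Smooth 2π-periodic integrand: ∫_0^{2π} exp(cos x) dx = 2π I_0(1).
    exact = 2 * np.pi * i0(1.0)

    for n in (4, 8, 16, 32):
        x = 2 * np.pi * np.arange(n) / n          # equispaced periodic nodes
        approx = (2 * np.pi / n) * np.sum(np.exp(np.cos(x)))   # periodic trapezoidal rule
        print(f"n = {n:3d}   error = {abs(approx - exact):.3e}")
    ```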

  14. A new numerical approach for uniquely solvable exterior Riemann-Hilbert problem on region with corners

    NASA Astrophysics Data System (ADS)

    Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira

    2014-06-01

    The numerical solution of the uniquely solvable exterior Riemann-Hilbert problem on a region with corners has previously been explored by discretizing the related integral equation at off-corner points using the Picard iteration method, without any modification of the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical errors of all iterations converge to the required solution; however, for certain problems this approach gives lower accuracy. Hence, this paper presents a new numerical approach for the problem that treats the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Owing to the existence of the corner points, a Gaussian quadrature is employed that avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.

  15. Numerical orbit generators of artificial earth satellites

    NASA Astrophysics Data System (ADS)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that contains updates and improvements relative to the previous integrators used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models resulting from the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed, and memory savings, as well as usability aspects, were also considered. A user's handbook, the full program listing, and a qualitative analysis of accuracy, processing time, and orbit perturbation effects are included as well.

  16. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358

  17. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014. The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840

  18. Numerical simulation of the generation, propagation, and diffraction of nonlinear waves in a rectangular basin: A three-dimensional numerical wave tank

    NASA Astrophysics Data System (ADS)

    Darwiche, Mahmoud Khalil M.

    The research presented herein is a contribution to the understanding of the numerical modeling of fully nonlinear, transient water waves. The first part of the work involves the development of a time-domain model for the numerical generation of fully nonlinear, transient waves by a piston type wavemaker in a three-dimensional, finite, rectangular tank. A time-domain boundary-integral model is developed for simulating the evolving fluid field. A robust nonsingular, adaptive integration technique for the assembly of the boundary-integral coefficient matrix is developed and tested. A parametric finite-difference technique for calculating the fluid- particle kinematics is also developed and tested. A novel compatibility and continuity condition is implemented to minimize the effect of the singularities that are inherent at the intersections of the various Dirichlet and/or Neumann subsurfaces. Results are presented which demonstrate the accuracy and convergence of the numerical model. The second portion of the work is a study of the interaction of the numerically-generated, fully nonlinear, transient waves with a bottom-mounted, surface-piercing, vertical, circular cylinder. The numerical model developed in the first part of this dissertation is extended to include the presence of the cylinder at the centerline of the basin. The diffraction of the numerically generated waves by the cylinder is simulated, and the particle kinematics of the diffracted flow field are calculated and reported. Again, numerical results showing the accuracy and convergence of the extended model are presented.

  19. Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media

    NASA Technical Reports Server (NTRS)

    Sharma, A.; Cogley, A. C.

    1982-01-01

    The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. The present paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved and the method's accuracy and computational speed are analyzed.

  20. Determination of the Ephemeris Accuracy for AJISAI, LAGEOS and ETALON Satellites, Obtained with A Simplified Numerical Motion Model Using the ILRS Coordinates

    NASA Astrophysics Data System (ADS)

    Kara, I. V.

    This paper describes a simplified numerical model of passive artificial Earth satellite (AES) motion. The model accuracy is determined using the International Laser Ranging Service (ILRS) high-precision coordinates. Those data are freely available at http://ilrs.gsfc.nasa.gov. The differential equations of the AES motion are solved by the Everhart numerical method of 17th and 19th orders with automatic correction of the integration step. The comparison between the AES coordinates computed with the motion model and the ILRS coordinates made it possible to determine the accuracy of the obtained ephemerides. As a result, the discrepancy of the computed Etalon-1 ephemerides from the ILRS data is about 10'' for a one-year ephemeris.

  1. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  2. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required. Highlights: • Higher-order cubature points for degrees 7 to 9 are developed. • The effect of the quadrature rule on the mass and stiffness matrices has been studied. • The cubature points always have positive integration weights. • The method is free from the inversion of a wide-bandwidth mass matrix. • The accuracy of the TSEM is improved by about one order of magnitude.

  3. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations That Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x, y(x)), 0 ≤ x ≤ 1, with y(0) = y_0 and y(1) = y_1, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.
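
    A minimal sketch of the baseline quadrature method described above (the plain trapezoidal rule, without the Euler-Maclaurin correction terms the paper derives), applied to a hypothetical equation of the stated form. The kernel is the Green's function of -y'' on [0,1], whose x-derivative jumps across x = t, F is a mildly nonlinear function, and the discrete nonlinear system is solved by successive approximations.

    ```python
    import numpy as np

    # Hypothetical instance of y(x) = r(x) + ∫_0^1 g(x,t) F(t, y(t)) dt.
    def g(x, t):
        return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

    F = lambda t, y: np.sin(y)          # illustrative nonlinearity
    r = lambda x: x * (1.0 - x)         # illustrative right-hand side

    n = 65
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
    G = g(x[:, None], x[None, :])                              # kernel on the grid

    y = r(x)                                                   # initial guess
    for it in range(100):
        y_new = r(x) + G @ (w * F(x, y))                       # Nystrom / quadrature step
        change = np.max(np.abs(y_new - y))
        y = y_new
        if change < 1e-12:                                     # successive approximations converged
            break
    print(f"converged in {it + 1} iterations, y(0.5) ≈ {y[n // 2]:.8f}")
    ```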

  4. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations that Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + ∫₀¹ g(x,t) F(t, y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x, y(x)), 0 ≤ x ≤ 1, with y(0) = y_0 and y(1) = y_1, or other linear boundary conditions. A quadrature method that is especially suitable and that has been employed for such equations is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.

  5. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  6. Enhancement Of Reading Accuracy By Multiple Data Integration

    NASA Astrophysics Data System (ADS)

    Lee, Kangsuk

    1989-07-01

    In this paper, a multiple sensor integration technique with neural network learning algorithms is presented which can enhance the reading accuracy of hand-written numerals. Many document reading applications involve hand-written numerals in a predetermined location on a form, and in many cases, critical data are redundantly described. The amount on a personal check is one such case, as it is written redundantly in numerals and in alphabetical form. Information from two optical character recognition modules, one specialized for digits and one for words, is combined to yield an enhanced recognition of the amount. The combination can be accomplished by a decision tree with "if-then" rules, but by simply fusing two or more sets of sensor data in a single expanded neural net, the same functionality can be expected with a much reduced system cost. Experimental results of fusing two neural nets to enhance overall recognition performance using a controlled data set are presented.

  7. A new operational approach for solving fractional variational problems depending on indefinite integrals

    NASA Astrophysics Data System (ADS)

    Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.

    2018-04-01

    In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.

  8. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows a larger time step size in numerical integrations.
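
    For background only, a minimal sketch of the classical non-relativistic Boris push, the best-known volume-preserving scheme for charged-particle motion in electromagnetic fields; it is not the paper's high-order relativistic construction, and the fields and parameters below are illustrative.

    ```python
    import numpy as np

    q_m = 1.0                                   # charge-to-mass ratio (illustrative)
    E = np.array([0.0, 0.0, 0.0])               # uniform electric field
    B = np.array([0.0, 0.0, 1.0])               # uniform magnetic field
    dt, n_steps = 0.1, 10000

    x = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 0.0])
    for _ in range(n_steps):
        v_minus = v + 0.5 * dt * q_m * E        # half electric kick
        t_vec = 0.5 * dt * q_m * B              # magnetic rotation vector
        s_vec = 2.0 * t_vec / (1.0 + t_vec @ t_vec)
        v_prime = v_minus + np.cross(v_minus, t_vec)
        v_plus = v_minus + np.cross(v_prime, s_vec)
        v = v_plus + 0.5 * dt * q_m * E         # second half electric kick
        x = x + dt * v

    # With E = 0 the rotation step conserves the speed exactly, step after step.
    print("speed after long integration:", np.linalg.norm(v))
    ```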

  9. Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    Two FORTRAN programs using an interactive graphic terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex variable solutions, and generates plots showing the accuracy and the error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog to the stability region for classes of non-linear ODEs as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
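
    A minimal, hypothetical sketch (not the FORTRAN programs described above) of how the linear stability region of a multistep formula can be traced: the boundary-locus method maps the unit circle through hλ = ρ(w)/σ(w), shown here for the two-step Adams-Bashforth method.

    ```python
    import numpy as np

    # Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n)
    # Characteristic polynomials: rho(w) = w**2 - w,  sigma(w) = 1.5*w - 0.5.
    theta = np.linspace(0.0, 2.0 * np.pi, 721)
    w = np.exp(1j * theta)
    hlam = (w**2 - w) / (1.5 * w - 0.5)     # boundary locus h*lambda(theta)

    # The stability region lies inside this curve near the negative real axis.
    print("leftmost real intercept of the boundary:", hlam.real.min())   # about -1.0
    ```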

  10. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  11. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.

  12. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.
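
    A minimal sketch of an exponential integrator of the kind compared above, first-order exponential time differencing (ETD1) on a made-up stiff scalar problem rather than the Kohn-Sham equations: the stiff linear part is treated exactly through the exponential, while the nonlinear part is frozen over the step.

    ```python
    import numpy as np

    # Stiff test problem y' = L*y + N(y) with a stiff linear part and mild nonlinearity.
    L = -100.0
    N = lambda y: np.sin(y)

    def etd1_step(y, h):
        """First-order exponential time differencing (ETD1) step."""
        eLh = np.exp(L * h)
        return eLh * y + (eLh - 1.0) / L * N(y)

    def euler_step(y, h):
        return y + h * (L * y + N(y))

    h = 0.05                           # well above the explicit-Euler limit 2/|L|
    y_etd = y_eul = 1.0
    for _ in range(200):
        y_etd = etd1_step(y_etd, h)
        y_eul = euler_step(y_eul, h)

    print("ETD1 after t = 10   :", y_etd)   # stays bounded and decays
    print("explicit Euler      :", y_eul)   # blows up at this step size
    ```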

  13. NUMERICAL INTEGRAL OF RESISTANCE COEFFICIENTS IN DIFFUSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Q. S., E-mail: zqs@ynao.ac.cn

    2017-01-10

    The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of the scattering angle. There are possible singularities in the case of an attractive potential, which may cause problems for the numerical integration. In order to avoid these problems, I have used a proper scheme, e.g., splitting into many subintervals where the width of each subinterval is determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated by using Romberg's method, therefore the accuracy is high (i.e., ∼10⁻¹²). The results of collision integrals and their derivatives for −7 ≤ ψ ≤ 5 are listed. By using Hermite polynomial interpolation from those data, the collision integrals can be obtained with an accuracy of 10⁻¹⁰. For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for collision integrals are available with an accuracy of 10⁻¹¹. I have compared the final results of resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Comparing with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
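
    A minimal, generic sketch of Romberg's method as named above (not the paper's code): successively refined trapezoidal sums combined by Richardson extrapolation; on a smooth integrand a handful of levels reaches near machine precision. The integrand below is an arbitrary smooth example, not a collision integral.

    ```python
    import numpy as np
    from scipy.special import erf

    def romberg(f, a, b, levels=8):
        """Romberg integration: trapezoid refinement + Richardson extrapolation."""
        R = np.zeros((levels, levels))
        h = b - a
        R[0, 0] = 0.5 * h * (f(a) + f(b))
        for k in range(1, levels):
            h *= 0.5
            mids = a + h * np.arange(1, 2**k, 2)          # new midpoints at this level
            R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(mids))
            for j in range(1, k + 1):                     # extrapolate across columns
                R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
        return R[levels - 1, levels - 1]

    # Example: ∫_0^1 exp(-x^2) dx, known in closed form via erf.
    exact = 0.5 * np.sqrt(np.pi) * erf(1.0)
    print(romberg(lambda x: np.exp(-x**2), 0.0, 1.0), exact)
    ```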

  14. Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.

  15. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  16. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.

  17. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is the 3D Gauss-Legendre quadrature (GLQ). However, the traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. In this method, the adaptive discretization strategy is used to improve the accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement, by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
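
    A minimal sketch of the plain (non-adaptive, non-optimized) 3D Gauss-Legendre quadrature the paper starts from: the Newton integral for the potential of a single tesseroid evaluated with tensor-product GL nodes. The tesseroid geometry, density, and observation point below are illustrative values only.

    ```python
    import numpy as np

    G = 6.674e-11                                   # gravitational constant [m^3 kg^-1 s^-2]
    rho = 2670.0                                    # illustrative density [kg/m^3]

    # Tesseroid bounds: radius [m], colatitude and longitude [rad] (illustrative).
    r1, r2 = 6371e3 - 10e3, 6371e3
    th1, th2 = np.radians(44.0), np.radians(46.0)
    la1, la2 = np.radians(9.0), np.radians(11.0)

    # Observation point above the tesseroid centre.
    r_p, th_p, la_p = 6371e3 + 50e3, np.radians(45.0), np.radians(10.0)

    def glq_nodes(a, b, n):
        """Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b]."""
        x, w = np.polynomial.legendre.leggauss(n)
        return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

    n = 8                                           # nodes per dimension
    rr, wr = glq_nodes(r1, r2, n)
    tt, wt = glq_nodes(th1, th2, n)
    ll, wl = glq_nodes(la1, la2, n)

    V = 0.0
    for ri, wi in zip(rr, wr):
        for tj, wj in zip(tt, wt):
            # distance from observation point to all longitude nodes at (ri, tj)
            cospsi = (np.cos(th_p) * np.cos(tj)
                      + np.sin(th_p) * np.sin(tj) * np.cos(la_p - ll))
            dist = np.sqrt(r_p**2 + ri**2 - 2.0 * r_p * ri * cospsi)
            V += wi * wj * np.sum(wl * ri**2 * np.sin(tj) / dist)

    print("potential at observation point:", G * rho * V, "m^2/s^2")
    ```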

  18. Error in the determination of the deformed shape of prismatic beams using the double integration of curvature

    NASA Astrophysics Data System (ADS)

    Sigurdardottir, Dorotea H.; Stearns, Jett; Glisic, Branko

    2017-07-01

    The deformed shape is a consequence of loading the structure and it is defined by the shape of the centroid line of the beam after deformation. The deformed shape is a universal parameter of beam-like structures. It is correlated with the curvature of the cross-section; therefore, any unusual behavior that affects the curvature is reflected through the deformed shape. Excessive deformations cause user discomfort, damage to adjacent structural members, and may ultimately lead to issues in structural safety. However, direct long-term monitoring of the deformed shape in real-life settings is challenging, and an alternative is indirect determination of the deformed shape based on curvature monitoring. The challenge of the latter is an accurate evaluation of error in the deformed shape determination, which is directly correlated with the number of sensors needed to achieve the desired accuracy. The aim of this paper is to study the deformed shape evaluated by numerical double integration of the monitored curvature distribution along the beam, and create a method to predict the associated errors and suggest the number of sensors needed to achieve the desired accuracy. The error due to the accuracy in the curvature measurement is evaluated within the scope of this work. Additionally, the error due to the numerical integration is evaluated. This error depends on the load case (i.e., the shape of the curvature diagram), the magnitude of curvature, and the density of the sensor network. The method is tested on a laboratory specimen and a real structure. In a laboratory setting, the double integration is in excellent agreement with the beam theory solution which was within the predicted error limits of the numerical integration. Consistent results are also achieved on a real structure—Streicker Bridge on Princeton University campus.
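
    A minimal sketch of the double-integration idea itself, on a hypothetical simply supported beam under uniform load rather than the monitored structures: the curvature sampled at a few "sensor" locations is integrated twice with the cumulative trapezoidal rule, the two integration constants are fixed by the zero-deflection supports, and the error shrinks as the sensor density grows.

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Simply supported beam under uniform load (illustrative values).
    Lb, EI, q = 10.0, 5.0e7, 2.0e3                  # span [m], stiffness [N m^2], load [N/m]
    kappa = lambda x: -q * x * (Lb - x) / (2.0 * EI)            # curvature w''(x)
    w_exact = lambda x: q * x * (Lb**3 - 2 * Lb * x**2 + x**3) / (24.0 * EI)

    for n_sensors in (5, 9, 17):
        x = np.linspace(0.0, Lb, n_sensors)
        slope = cumulative_trapezoid(kappa(x), x, initial=0.0)   # first integration
        w = cumulative_trapezoid(slope, x, initial=0.0)          # second integration
        w -= x * w[-1] / Lb            # enforce the support conditions w(0) = w(Lb) = 0
        err = np.max(np.abs(w - w_exact(x)))
        print(f"{n_sensors:2d} sensors: max error {err:.3e} m "
              f"(midspan deflection ≈ {w[n_sensors // 2]:.4e} m)")
    ```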

  19. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
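    As context for the FFT/FHT acceleration described above, the sketch below applies the trapezoidal rule directly to the Bromwich integral for a transform with a known inverse; the contour abscissa, frequency cutoff and step are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of numerical Laplace inversion by applying the trapezoidal
# rule directly to the Bromwich integral (the starting point of the FFT/FHT
# methods in the abstract, without their acceleration). F(s) = 1/(s + 1) is
# used so the result can be checked against f(t) = exp(-t).
import numpy as np

def bromwich_trapezoid(F, t, a=0.5, omega_max=400.0, n=40001):
    """f(t) ~ (e^{a t}/pi) * Re( integral_0^omega_max e^{i w t} F(a+iw) dw )."""
    w, dw = np.linspace(0.0, omega_max, n, retstep=True)
    vals = np.real(np.exp(1j * w * t) * F(a + 1j * w))
    # composite trapezoidal rule on a uniform grid
    return np.exp(a * t) / np.pi * dw * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

F = lambda s: 1.0 / (s + 1.0)          # known inverse: f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, bromwich_trapezoid(F, t), np.exp(-t))
```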

  20. Numerical analysis of the asymptotic two-point boundary value solution for N-body trajectories.

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.; Allemann, R. A.

    1972-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical boundary value solution applicable to a broad class of trajectory problems. In addition, the earlier first-order solutions have been extended to second-order to determine if improved accuracy is possible. Comparisons between the asymptotic solution and numerical integration for several lunar and interplanetary trajectories show that the asymptotic solution is generally quite accurate. Also, since no iterations are required, a solution to the boundary value problem is obtained in a fraction of the time required for numerically integrated solutions.

  1. Design, qualification, manufacturing and integration of IXV Ablative Thermal Protection System

    NASA Astrophysics Data System (ADS)

    Cioeta, Mario; Di Vita, Gandolfo; Signorelli, Maria Teresa; Bianco, Gianluca; Cutroni, Maurizio; Damiani, Francesco; Ferretti, Viviana; Rotondo, Adriano

    2016-07-01

    In the present paper, all the activities carried out by Avio S.p.A. in order to define, qualify, manufacture and integrate the IXV Ablative TPS will be presented. In particular, the extensive numerical simulation and both small- and full-scale testing activities will be overviewed. A wide-ranging testing activity has been carried out in order to verify, confirm and correlate the numerical models used for TPS sizing. Tests ranged from classical thermo-mechanical characterization of traction specimens to tests in plasma wind tunnels on dedicated prototypes. Finally, manufacturing and integration activities will be described, emphasizing the technological aspects solved in order to meet the stringent requirements in terms of shape accuracy and integration tolerances.

  2. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t (sub n) to t (sub n+1), we expand a series about the intermediate point t (sub n+mu):=t (sub n) + mu h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y'= lambda y, Re (lambda) less than 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODE's with a singular point.
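    For context, the sketch below shows the classical explicit Taylor series method of degree 2 on the test problem y' = lambda*y; it illustrates only the baseline family the paper extends, not the implicit mu-expansion scheme itself, and the step size is an illustrative assumption.

```python
# Minimal sketch of the classical (explicit) Taylor series method of degree 2
# for y' = lam*y, included only to illustrate the family of methods the
# abstract extends; it is not the implicit mu-expansion scheme of the paper.
import numpy as np

lam = -3.0
def taylor2_step(y, h):
    # y' = lam*y, y'' = lam^2*y  =>  y(t+h) ~ y + h*y' + (h^2/2)*y''
    return y + h * lam * y + 0.5 * h**2 * lam**2 * y

h, y, t = 0.05, 1.0, 0.0
while t < 1.0 - 1e-12:
    y = taylor2_step(y, h)
    t += h
print(y, np.exp(lam * 1.0))   # numerical vs exact solution at t = 1
```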

  3. Using high-order polynomial basis in 3-D EM forward modeling based on volume integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Kuvshinov, Alexey

    2018-05-01

    3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.

  4. Exact analytical formulae for linearly distributed vortex and source sheets influence computation in 2D vortex methods

    NASA Astrophysics Data System (ADS)

    Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.

    2017-11-01

    We consider the methodology of numerical schemes development for two-dimensional vortex method. We describe two different approaches to deriving integral equation for unknown vortex sheet intensity. We simulate the velocity of the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume intensity distributions of free and attached vortex sheets and attached source sheet to be approximated with piecewise constant or piecewise linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have a different computational cost. The study shows that a Galerkin-type approach to solving boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to raise significantly the accuracy of vortex sheet intensity computation and improve the quality of velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written down in the invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.

  5. A numerical method for interface problems in elastodynamics

    NASA Technical Reports Server (NTRS)

    Mcghee, D. S.

    1984-01-01

    The numerical implementation of a formulation for a class of interface problems in elastodynamics is discussed. This formulation combines the use of the finite element and boundary integral methods to represent the interior and the exterior regions, respectively. In particular, the response of a semicylindrical alluvial valley in a homogeneous halfspace to incident antiplane SH waves is considered to determine the accuracy and convergence of the numerical procedure. Numerical results are obtained for several combinations of the incidence angle, frequency of excitation, and relative stiffness between the inclusion and the surrounding halfspace. The results tend to confirm the theoretical estimates that the convergence is of order h^2 for the piecewise linear elements used. It was also observed that the accuracy decreases as the frequency of excitation increases or as the relative stiffness of the inclusion decreases.

  6. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  7. The gravitational potential of axially symmetric bodies from a regularized green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.

  8. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.

  9. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present this simulation is performed by means of numerical integration taking into account the perturbations from the planets and the Moon given by planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain Chebyshev polynomial coefficients for a large number of equal interpolation intervals. However, the ephemerides have been constructed to keep continuity at the junctions of adjacent intervals only for the coordinates and their first derivatives (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of numerical integration. If 34-digit format (128-bit floating-point numbers) is considered, the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the interpolation interval junctions. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on a conditional least-squares fitting of the Chebyshev polynomial coefficients, with the conditions being the equality of coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions never lie within a step and always coincide with its end. This way can only be applied at 16-digit decimal precision, because it assumes continuity of the planets' coordinates and their first derivatives. Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of the 15th- and 31st-order Everhart method at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of perturbations. The results of the research indicate that the integration step correction increases numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.

  10. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  11. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U(sub t) + aU(sub x) = alpha U(sub xx). Accuracy is also verified on the nonlinear problem, U(sub t) + F(sub x) = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.

  12. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules no longer perform well. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in fewer integration points and high accuracy.

  13. Buckling Of Shells Of Revolution /BOSOR/ with various wall constructions

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Bushnell, D.; Sobel, L. H.

    1969-01-01

    Computer program, using numerical integration and finite difference techniques, solves almost any buckling problem for shells exhibiting orthotropic behavior. Stability analyses can be performed with reasonable accuracy and without unduly restrictive approximations.

  14. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provide repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  15. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
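    A minimal sketch of the contrast drawn above, between a first-order explicit fixed-step update and an adaptive-step integrator, is given below for a toy linear-reservoir model; the model, forcing and parameters are illustrative assumptions rather than the paper's case study.

```python
# Minimal sketch contrasting a first-order explicit fixed-step scheme (the
# kind criticized in the abstract) with an adaptive-step integrator, using a
# toy linear-reservoir model dS/dt = P - k*S. The model, forcing and
# parameters are illustrative assumptions, not the paper's case study.
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0                          # recession constant, 1/day (assumed)
P = lambda t: 10.0 * (t < 0.5)   # 10 mm/day rain for the first half day

def rhs(t, S):
    return P(t) - k * S

# fixed-step explicit Euler with a daily step (deliberately coarse)
dt, S, t = 1.0, 0.0, 0.0
for _ in range(5):
    S = S + dt * rhs(t, S)
    t += dt
print("explicit Euler, dt = 1 day:", S)

# adaptive-step Runge-Kutta reference solution
sol = solve_ivp(rhs, (0.0, 5.0), [0.0], rtol=1e-8, atol=1e-10)
print("adaptive RK45:             ", sol.y[0, -1])
```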

  16. Application of matched asymptotic expansions to lunar and interplanetary trajectories. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.

    1973-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical solution to the problem of N bodies. The earlier first-order solutions, derived by the method of matched asymptotic expansions, have been extended to second order for the purpose of obtaining increased accuracy. The derivation of the second-order solution is summarized by showing the essential steps, some in functional form. The general asymptotic solution has been used as a basis for formulating a number of analytical two-point boundary value solutions. These include earth-to-moon, one- and two-impulse moon-to-earth, and interplanetary solutions. The results show that the accuracies of the asymptotic solutions range from an order of magnitude better than conic approximations to that of numerical integration itself. Also, since no iterations are required, the asymptotic boundary value solutions are obtained in a fraction of the time required for comparable numerically integrated solutions. The subject of minimizing the second-order error is discussed, and recommendations are made for further work directed toward achieving a uniform accuracy in all applications.

  17. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  18. Geometric integration in Born-Oppenheimer molecular dynamics.

    PubMed

    Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N

    2011-12-14

    Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetics in a perfect reversible dynamics. © 2011 American Institute of Physics

  19. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.

  20. ICASE Semiannual Report, October 1, 1992 through March 31, 1993

    DTIC Science & Technology

    1993-06-01

    NUMERICAL MATHEMATICS Saul Abarbanel Further results have been obtained regarding long time integration of high order compact finite difference schemes...overall accuracy. These problems are common to all numerical methods: finite differences, finite elements and spectral methods. It should be noted that...fourth order finite difference scheme. * In the same case, the D6 wavelets provide a sixth order finite difference, noncompact formula. * The wavelets

  1. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region is the full unit square circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness, even in the case of a wavefront with incomplete data.
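    One of the four bases compared above, the 2-D Chebyshev product basis, can be used for modal wavefront fitting over the unit square as in the sketch below; the sampled wavefront and polynomial degrees are illustrative assumptions.

```python
# Minimal sketch of modal wavefront fitting over a square aperture with a
# 2-D Chebyshev product basis (one of the four bases compared in the
# abstract), via numpy's chebvander2d and linear least squares. The test
# wavefront and degrees are illustrative assumptions.
import numpy as np
from numpy.polynomial import chebyshev as C

# sampled wavefront on the unit square [-1, 1] x [-1, 1]
x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
W = 0.3 * x**2 * y + 0.1 * np.cos(2 * x) - 0.05 * y**3   # "measured" phase

deg = (4, 4)                                # max degree in x and in y
A = C.chebvander2d(x.ravel(), y.ravel(), deg)
coeffs, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)

W_fit = (A @ coeffs).reshape(W.shape)
print("rms residual:", np.sqrt(np.mean((W - W_fit) ** 2)))
```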

  2. A Gaussian quadrature method for total energy analysis in electronic state calculations

    NASA Astrophysics Data System (ADS)

    Fukushima, Kimichika

    This article reports studies by Fukushima and coworkers since 1980 concerning their highly accurate numerical integral method using Gaussian quadratures to evaluate the total energy in electronic state calculations. Gauss-Legendre and Gauss-Laguerre quadratures were used for integrals in the finite and infinite regions, respectively. Our previous article showed that, for diatomic molecules such as CO and FeO, elliptic coordinates efficiently achieved high numerical integral accuracy even with a numerical basis set including transition metal atomic orbitals. This article will generalize the approach in a straightforward way to multiatomic systems, with direct integrals in each decomposed elliptic coordinate determined from the nuclear positions of picked-up atom pairs. Sample calculations were performed for the molecules O3 and H2O. This article will also present, in another coordinate, a numerical integral that partially uses Becke's decomposition published in 1988, but without Becke's fuzzy cell generated by the polynomials of the internuclear distance between the pair atoms. Instead, simple nuclear weights comprising exponential functions around the nuclei are used. The one-center integral is performed with a Gaussian quadrature pack in a spherical coordinate, included in the author's original program from around 1980. For this decomposition into one-center integrals, sample calculations are carried out for Li2.
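    The two quadrature families named above can be illustrated on simple test integrals as in the sketch below (Gauss-Legendre for a finite interval, Gauss-Laguerre for a semi-infinite one); these are simple checks, not the molecular total-energy integrals of the article.

```python
# Minimal sketch of the two quadrature families named in the abstract:
# Gauss-Legendre for a finite interval and Gauss-Laguerre for a
# semi-infinite one. The integrands are simple checks, not the molecular
# total-energy integrals of the paper.
import numpy as np

# finite region: integral of cos(x) over [0, pi/2] = 1
xg, wg = np.polynomial.legendre.leggauss(8)          # nodes on [-1, 1]
a, b = 0.0, np.pi / 2
x = 0.5 * (b - a) * xg + 0.5 * (b + a)
print("Gauss-Legendre:", 0.5 * (b - a) * np.sum(wg * np.cos(x)))

# semi-infinite region: integral of x^2 e^{-x} over [0, inf) = 2
xl, wl = np.polynomial.laguerre.laggauss(8)          # weight e^{-x} built in
print("Gauss-Laguerre:", np.sum(wl * xl**2))
```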

  3. Time-splitting combined with exponential wave integrator fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximations of the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are based on: (i) the applications of a time-splitting Fourier spectral method for Schrödinger-like equation in SBq system, (ii) the utilizations of exponential wave integrator Fourier pseudospectral for spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to fast Fourier transform. The numerical examples are presented to show the efficiency and accuracy of our method.

  4. QIKAIM, a fast seminumerical algorithm for the generation of minute-of-arc accuracy satellite predictions

    NASA Astrophysics Data System (ADS)

    Vermeer, M.

    1981-07-01

    A program was designed to replace AIMLASER for the generation of aiming predictions, to achieve a major saving in computing time, and to keep the program small enough for use even on small systems. The adopted approach incorporates the numerical integration of the orbit through a pass, limiting the computation of osculating elements to only one point per pass. The numerical integration method, which is fourth order in delta t in the cumulative error after a given time lapse, is presented. Algorithms are explained, and a flowchart and listing of the program are provided.

  5. On the Minimal Accuracy Required for Simulating Self-gravitating Systems by Means of Direct N-body Methods

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-01

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
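    A minimal sketch of the kind of direct three-body integration with an energy-conservation check discussed above is given below, using an ordinary double-precision Runge-Kutta integrator rather than the arbitrary-precision ensembles of the paper; the figure-eight initial conditions are quoted from the literature to limited precision.

```python
# Minimal sketch of a direct three-body integration with an energy-
# conservation check, in the spirit of the abstract (but with an ordinary
# double-precision Runge-Kutta integrator, not arbitrary-precision
# ensembles). G = m_i = 1; the initial conditions are the well-known
# figure-eight orbit, quoted to limited precision.
import numpy as np
from scipy.integrate import solve_ivp

def deriv(t, s):
    r = s[:6].reshape(3, 2)
    v = s[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

def energy(s):
    r = s[:6].reshape(3, 2)
    v = s[6:].reshape(3, 2)
    kin = 0.5 * np.sum(v ** 2)
    pot = sum(-1.0 / np.linalg.norm(r[i] - r[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

r0 = np.array([[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]])
v0 = np.array([[0.46620368, 0.43236573], [0.46620368, 0.43236573],
               [-0.93240737, -0.86473146]])
s0 = np.concatenate([r0.ravel(), v0.ravel()])
sol = solve_ivp(deriv, (0.0, 20.0), s0, rtol=1e-10, atol=1e-12)
print("relative energy drift:",
      abs(energy(sol.y[:, -1]) - energy(s0)) / abs(energy(s0)))
```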

  6. Windowed Green function method for the Helmholtz equation in the presence of multiply layered media

    NASA Astrophysics Data System (ADS)

    Bruno, O. P.; Pérez-Arancibia, C.

    2017-06-01

    This paper presents a new methodology for the solution of problems of two- and three-dimensional acoustic scattering (and, in particular, two-dimensional electromagnetic scattering) by obstacles and defects in the presence of an arbitrary number of penetrable layers. Relying on the use of certain slow-rise windowing functions, the proposed windowed Green function approach efficiently evaluates oscillatory integrals over unbounded domains, with high accuracy, without recourse to the highly expensive Sommerfeld integrals that have typically been used to account for the effect of underlying planar multilayer structures. The proposed methodology, whose theoretical basis was presented in the recent contribution (Bruno et al. 2016 SIAM J. Appl. Math. 76, 1871-1898. (doi:10.1137/15M1033782)), is fast, accurate, flexible and easy to implement. Our numerical experiments demonstrate that the numerical errors resulting from the proposed approach decrease faster than any negative power of the window size. In a number of examples considered in this paper, the proposed method is up to thousands of times faster, for a given accuracy, than corresponding methods based on the use of Sommerfeld integrals.

  8. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since the integration technique for non-polynomial functions is immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be naturally used to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, it is observed that VNM achieves better results than triangular 3-node elements in the accuracy test.

  9. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and celestial navigation system (CNS) can be used in marine applications. What is more, due to the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by the surroundings, which leads to discontinuous output. Thus, the uncertainty caused by the lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results and analysis show that the attitude errors can be effectively reduced by utilizing the proposed robust filter when measurements are missing or discontinuous. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  10. Preserving Symplecticity in the Numerical Integration of Linear Beam Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Christopher K.

    2017-07-01

    Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third-order transfer matrix for a fringe field is shown.
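    The bounded energy error that motivates symplectic integration can be illustrated with a generic second-order leapfrog (velocity-Verlet) step on a harmonic oscillator, as sketched below; this is not the paper's third-order Lie-algebraic beam-optics method, and the step size and step count are illustrative assumptions.

```python
# Minimal sketch of symplectic (leapfrog / velocity-Verlet) integration on a
# harmonic oscillator, illustrating the bounded energy error characteristic
# of symplectic schemes, compared with forward Euler. A generic second-order
# leapfrog, not the paper's third-order Lie-algebraic beam-optics method.
import numpy as np

def energy(q, p):
    return 0.5 * p**2 + 0.5 * q**2

def leapfrog(q, p, h):
    p -= 0.5 * h * q          # half kick (force = -q)
    q += h * p                # drift
    p -= 0.5 * h * q          # half kick
    return q, p

def euler(q, p, h):
    return q + h * p, p - h * q

h, n = 0.1, 10000
q1 = q2 = 1.0
p1 = p2 = 0.0
for _ in range(n):
    q1, p1 = leapfrog(q1, p1, h)
    q2, p2 = euler(q2, p2, h)
print("leapfrog energy error:", abs(energy(q1, p1) - 0.5))   # stays bounded
print("Euler energy error:   ", abs(energy(q2, p2) - 0.5))   # grows without bound
```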

  11. A study of the displacement of a Wankel rotary engine

    NASA Astrophysics Data System (ADS)

    Beard, J. E.; Pennock, G. R.

    1993-03-01

    The volumetric displacement of a Wankel rotary engine is a function of the trochoid ratio and the pin size ratio, assuming the engine has a unit depth and the number of lobes is specified. The mathematical expression which defines the displacement contains a function which can be evaluated directly and a normal elliptic integral of the second kind which does not have an explicit solution. This paper focuses on the contribution of the elliptic integral to the total displacement of the engine. The influence of the elliptic integral is shown to account for as much as 20 percent of the total displacement, depending on the trochoid ratio and the pin size ratio. Two numerical integration techniques are compared in the paper, namely, the trapezoidal rule and Simpson's 1/3 rule. The bounds on the error associated with each numerical method are analyzed. The results indicate that the numerical method has a minimal effect on the accuracy of the calculated displacement for a practical number of integration steps. The paper also evaluates the influence of manufacturing tolerances on the calculated displacement and the actual displacement. Finally, a numerical example of the common three-lobed Wankel rotary engine is included for illustrative purposes.
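    The comparison of the two quadrature rules mentioned above can be sketched on the complete elliptic integral of the second kind, checked against a library reference; the modulus value below is an illustrative assumption, not an engine parameter from the paper.

```python
# Minimal sketch comparing the trapezoidal rule and Simpson's 1/3 rule on the
# complete elliptic integral of the second kind,
# E(m) = int_0^{pi/2} sqrt(1 - m sin^2(t)) dt, checked against scipy.
# The modulus m is an illustrative value, not a Wankel engine parameter.
import numpy as np
from scipy.special import ellipe
from scipy.integrate import trapezoid, simpson

m = 0.6                                      # parameter m = k^2 (assumed)
t = np.linspace(0.0, np.pi / 2, 9)           # odd point count for Simpson's rule
f = np.sqrt(1.0 - m * np.sin(t) ** 2)

print("trapezoid :", trapezoid(f, t))
print("Simpson   :", simpson(f, x=t))
print("reference :", ellipe(m))
```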

  12. The numerical calculation of laminar boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Klineberg, J. M.; Steger, J. L.

    1974-01-01

    Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.

  13. KAM Tori Construction Algorithms

    NASA Astrophysics Data System (ADS)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
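    The benefit of a Hanning window in extracting Fourier information from a quasi-periodic signal, central to the second algorithm above, can be illustrated as in the sketch below; the two-frequency test signal is synthetic rather than a numerically integrated torus.

```python
# Minimal sketch of the effect of a Hanning window on spectral leakage when
# extracting Fourier information from a quasi-periodic signal, in the spirit
# of the windowed finite Fourier transform mentioned in the abstract. The
# two-frequency test signal is synthetic, not a numerically integrated torus.
import numpy as np

n, dt = 4096, 0.01
t = np.arange(n) * dt
sig = np.cos(2 * np.pi * 1.234567 * t) + 0.3 * np.cos(2 * np.pi * 3.3 * t)

freqs = np.fft.rfftfreq(n, dt)
plain = np.abs(np.fft.rfft(sig))
windowed = np.abs(np.fft.rfft(sig * np.hanning(n)))

# leakage well away from both spectral lines (around 8 Hz)
k = np.argmin(np.abs(freqs - 8.0))
print("relative leakage near 8 Hz, plain FFT  :", plain[k] / plain.max())
print("relative leakage near 8 Hz, Hanning FFT:", windowed[k] / windowed.max())
```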

  14. Regularization in Orbital Mechanics; Theory and Practice

    NASA Astrophysics Data System (ADS)

    Roa, Javier

    2017-09-01

    Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.

  15. MS-bar-on-shell quark mass relation up to four loops in QCD and a general SU(N) gauge group

    NASA Astrophysics Data System (ADS)

    Marquard, Peter; Smirnov, Alexander V.; Smirnov, Vladimir A.; Steinhauser, Matthias; Wellmann, David

    2016-10-01

    We compute the relation between heavy quark masses defined in the modified minimal subtraction and the on-shell schemes. Detailed results are presented for all coefficients of the SU(Nc) color factors. The reduction of the four-loop on-shell integrals is performed for a general QCD gauge parameter. Altogether there are about 380 master integrals. Some of them are computed analytically, others with high numerical precision using Mellin-Barnes representations, and the rest numerically with the help of FIESTA. We discuss in detail the precise numerical evaluation of the four-loop master integrals. Updated relations between various short-distance masses and the MS-bar quark mass to next-to-next-to-next-to-leading order accuracy are provided for the charm, bottom and top quarks. We discuss the dependence on the renormalization and factorization scale.

  16. Standard deviation of scatterometer measurements from space.

    NASA Technical Reports Server (NTRS)

    Fischer, R. E.

    1972-01-01

    The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.

  17. New Langevin and gradient thermostats for rigid body dynamics.

    PubMed

    Davidchack, R L; Ouldridge, T E; Tretyakov, M V

    2015-04-14

    We introduce two new thermostats, one of Langevin type and one of gradient (Brownian) type, for rigid body dynamics. We formulate rotation using the quaternion representation of angular coordinates; both thermostats preserve the unit length of quaternions. The Langevin thermostat also ensures that the conjugate angular momenta stay within the tangent space of the quaternion coordinates, as required by the Hamiltonian dynamics of rigid bodies. We have constructed three geometric numerical integrators for the Langevin thermostat and one for the gradient thermostat. The numerical integrators reflect key properties of the thermostats themselves. Namely, they all preserve the unit length of quaternions, automatically, without the need of a projection onto the unit sphere. The Langevin integrators also ensure that the angular momenta remain within the tangent space of the quaternion coordinates. The Langevin integrators are quasi-symplectic and of weak order two. The numerical method for the gradient thermostat is of weak order one. Its construction exploits ideas of Lie-group type integrators for differential equations on manifolds. We numerically compare the discretization errors of the Langevin integrators, as well as the efficiency of the gradient integrator compared to the Langevin ones when used in the simulation of rigid TIP4P water model with smoothly truncated electrostatic interactions. We observe that the gradient integrator is computationally less efficient than the Langevin integrators. We also compare the relative accuracy of the Langevin integrators in evaluating various static quantities and give recommendations as to the choice of an appropriate integrator.

  18. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  19. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time varying with operating condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework, compromising numerical stability and accuracy. Thus this paper proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.

  20. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should possibly take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. However, trying to summarize, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
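    The kind of scheme comparison described above can be sketched on a toy solid-body-rotation wind field, comparing the global error of the explicit midpoint scheme and classical RK4; the field, time step and integration length are illustrative assumptions, not the ECMWF-driven MPTRAC setup.

```python
# Minimal sketch comparing the global truncation error of two explicit
# Runge-Kutta family schemes (midpoint and classical RK4) on a solid-body
# rotation "wind field", echoing the kind of scheme comparison in the
# abstract. This toy field stands in for the ECMWF winds used in the paper.
import numpy as np

omega = 2 * np.pi / 86400.0          # one rotation per day (assumed)
wind = lambda x: omega * np.array([-x[1], x[0]])

def advect(x0, dt, t_end, step):
    x, t = np.array(x0, float), 0.0
    while t < t_end - 1e-9:
        x = step(x, dt)
        t += dt
    return x

midpoint = lambda x, dt: x + dt * wind(x + 0.5 * dt * wind(x))

def rk4(x, dt):
    k1 = wind(x); k2 = wind(x + 0.5 * dt * k1)
    k3 = wind(x + 0.5 * dt * k2); k4 = wind(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x0, t_end, dt = [1000e3, 0.0], 86400.0, 600.0
# after one full rotation the exact trajectory returns to its start point
exact = np.array([1000e3, 0.0])
for name, step in (("midpoint", midpoint), ("RK4", rk4)):
    err = np.linalg.norm(advect(x0, dt, t_end, step) - exact)
    print(name, "transport deviation [m]:", err)
```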

  1. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.

  2. Numerical Solutions of the Nonlinear Fractional-Order Brusselator System by Bernstein Polynomials

    PubMed Central

    Khan, Rahmat Ali; Tajadodi, Haleh; Johnston, Sarah Jane

    2014-01-01

    In this paper we propose Bernstein polynomials to obtain numerical solutions of the nonlinear fractional-order chaotic system known as the fractional-order Brusselator system. We use operational matrices of fractional integration and multiplication of Bernstein polynomials, which turn the nonlinear fractional-order Brusselator system into a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques. PMID:25485293

  3. Stochastic Integration H∞ Filter for Rapid Transfer Alignment of INS.

    PubMed

    Zhou, Dapeng; Guo, Lei

    2017-11-18

    The performance of an inertial navigation system (INS) operated on a moving base greatly depends on the accuracy of rapid transfer alignment (RTA). However, in practice, the coexistence of large initial attitude errors and uncertain observation noise statistics poses a great challenge for the estimation accuracy of misalignment angles. This study aims to develop a novel robust nonlinear filter, namely the stochastic integration H∞ filter (SIH∞F), for improving both the accuracy and robustness of RTA. In this new nonlinear H∞ filter, the stochastic spherical-radial integration rule is incorporated with the framework of the derivative-free H∞ filter for the first time, and the resulting SIH∞F simultaneously attenuates the negative effect in estimations caused by significant nonlinearity and large uncertainty. Comparisons between the SIH∞F and previously well-known methodologies are carried out by means of numerical simulation and a van test. The results demonstrate that the newly-proposed method outperforms the cubature H∞ filter. Moreover, the SIH∞F inherits the benefit of the traditional stochastic integration filter, but with more robustness in the presence of uncertainty.

  4. Computer program analyzes Buckling Of Shells Of Revolution with various wall construction, BOSOR

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Bushnell, D.; Sobel, L. H.

    1968-01-01

    Computer program performs stability analyses for a wide class of shells without unduly restrictive approximations. The program uses numerical integration, finite difference, or finite element techniques to solve, with reasonable accuracy, almost any buckling problem for shells exhibiting orthotropic behavior.

  5. A Strapdown Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter

    PubMed Central

    Gao, Wei; Zhang, Ya; Wang, Jianguo

    2014-01-01

    The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842

  6. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
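
    The "optimistic" execution path above reduces to subrange subdivision by bisection, the basic ingredient of any automatic adaptive quadrature. The sketch below shows that ingredient only (plain adaptive Simpson quadrature with a local error test); the Bayesian inference machinery of the BAAQ approach is not represented.

      # Illustrative sketch of the basic ingredient on the "optimistic" path: recursive
      # subrange subdivision by bisection, accepting a subinterval once two quadrature
      # sums of different refinement agree to the requested tolerance.  This is plain
      # adaptive Simpson quadrature, not the Bayesian machinery described in the paper.
      import math

      def adaptive_quad(f, a, b, tol=1e-10):
          def simpson(a, b):
              c = 0.5 * (a + b)
              return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

          def recurse(a, b, whole, tol):
              c = 0.5 * (a + b)
              left, right = simpson(a, c), simpson(c, b)
              if abs(left + right - whole) < 15.0 * tol:   # local error estimate
                  return left + right
              return recurse(a, c, left, tol / 2) + recurse(c, b, right, tol / 2)

          return recurse(a, b, simpson(a, b), tol)

      print(adaptive_quad(math.sin, 0.0, math.pi))   # expected value: 2.0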

  7. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy, and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, owing to the complementary navigation information obtained from the two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by the surroundings, which can make its output discontinuous. Thus, the uncertainty problem caused by the lost measurements will reduce the system accuracy. In this paper, a robust H∞ filter based on the Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on the Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results and analysis show that the attitude errors can be reduced effectively by utilizing the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and the availability of the proposed robust H∞ filter. PMID:26999153

  8. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
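
    The acceptance criterion mentioned above (trusting an approximate solution only while energy is sufficiently well conserved) can be illustrated with an ordinary double-precision integration. The sketch below integrates a hypothetical three-body configuration with a leapfrog scheme and monitors the relative energy error; it does not reproduce the arbitrary-precision brute-force runs of the paper.

      # Simple sketch of the energy-conservation check: integrate a hypothetical
      # three-body configuration with a double-precision leapfrog scheme and monitor
      # the relative energy error used as the acceptance criterion.
      import numpy as np

      G = 1.0
      m = np.array([1.0, 1.0, 1.0])
      x = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])
      v = np.array([[0.0, 0.3], [0.0, -0.3], [0.0, 0.0]])

      def accelerations(x):
          a = np.zeros_like(x)
          for i in range(len(m)):
              for j in range(len(m)):
                  if i != j:
                      r = x[j] - x[i]
                      a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
          return a

      def energy(x, v):
          kin = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))
          pot = 0.0
          for i in range(len(m)):
              for j in range(i + 1, len(m)):
                  pot -= G * m[i] * m[j] / np.linalg.norm(x[i] - x[j])
          return kin + pot

      e0, dt = energy(x, v), 1e-4
      for _ in range(100000):               # leapfrog (kick-drift-kick)
          v += 0.5 * dt * accelerations(x)
          x += dt * v
          v += 0.5 * dt * accelerations(x)
      print("relative energy error:", abs((energy(x, v) - e0) / e0))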

  9. An adaptive gridless methodology in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.

  10. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampére (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  11. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom in regards to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  12. Accuracy of unmodified Stokes' integration in the R-C-R procedure for geoid computation

    NASA Astrophysics Data System (ADS)

    Ismail, Zahra; Jamet, Olivier

    2015-06-01

    Geoid determinations by the Remove-Compute-Restore (R-C-R) technique involve the application of Stokes' integral to reduced gravity anomalies. Numerical Stokes' integration produces an error depending on the choice of the integration radius, grid resolution and Stokes' kernel function. In this work, we aim to evaluate the accuracy of Stokes' integral through a study on synthetic gravitational signals derived from EGM2008 over three different landscape areas with respect to the size of the integration domain and the resolution of the anomaly grid. The influence of the integration radius was studied earlier by several authors. Using real data, they found that choosing relatively small radii (less than 1°) makes it possible to reach optimal accuracy. We observe a general behaviour coherent with these earlier studies. On the other hand, we notice that increasing the integration radius up to 2° or 2.5° might bring significantly better results. We note that, unlike the smallest radius corresponding to a local minimum of the error curve, the optimal radius in the range 0° to 6° depends on the terrain characteristics. We also find that the high frequencies, from degree 600, improve continuously with the integration radius in both semi-mountainous and mountain areas. Finally, we note that the relative error of the computed geoid heights depends weakly on the anomaly spherical harmonic degree in the range from degree 200 to 2000. It remains greater than 10% for any integration radius up to 6°. This result tends to prove that a one-centimetre accuracy cannot be reached in semi-mountainous and mountainous regions with the unmodified Stokes' kernel.

  13. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form, and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  14. An interval precise integration method for transient unbalance response analysis of rotor system with uncertainty

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun

    2018-07-01

    A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic of the expansion coefficients. A theoretical error analysis of the proposed method is provided briefly, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM can keep good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.

  15. Formal Solutions for Polarized Radiative Transfer. I. The DELO Family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Carlin, Edgar S.; Steiner, Oskar

    The discussion regarding the numerical integration of the polarized radiative transfer equation is still open, and the comparison between the different numerical schemes proposed by different authors in the past is not fully clear. Aiming at facilitating the comprehension of the advantages and drawbacks of the different formal solvers, this work presents a reference paradigm for their characterization based on the concepts of order of accuracy, stability, and computational cost. Special attention is paid to understanding the numerical methods belonging to the Diagonal Element Lambda Operator family, in an attempt to highlight their specificities.

  16. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied for gravity field modelling have recently been developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of hierarchical matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows memory requirements to be reduced significantly while improving efficiency. The poster presents our preliminary results of implementing the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.

  17. A simple and efficient shear-flexible plate bending element

    NASA Technical Reports Server (NTRS)

    Chaudhuri, Reaz A.

    1987-01-01

    A shear-flexible triangular element formulation, which utilizes an assumed quadratic displacement potential energy approach and is numerically integrated using Gauss quadrature, is presented. The Reissner/Mindlin hypothesis of constant cross-sectional warping is directly applied to the three-dimensional elasticity theory to obtain a moderately thick-plate theory or constant shear-angle theory (CST), wherein the middle surface is no longer considered to be the reference surface and the two rotations are replaced by the two in-plane displacements as nodal variables. The resulting finite-element possesses 18 degrees of freedom (DOF). Numerical results are obtained for two different numerical integration schemes and a wide range of meshes and span-to-thickness ratios. These, when compared with available exact, series or finite-element solutions, demonstrate accuracy and rapid convergence characteristics of the present element. This is especially true in the case of thin to very thin plates, when the present element, used in conjunction with the reduced integration scheme, outperforms its counterpart, based on discrete Kirchhoff constraint theory (DKT).

  18. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.

  19. Analysis of thin plates with holes by using exact geometrical representation within XFEM.

    PubMed

    Perumal, Logah; Tso, C P; Leng, Lim Thong

    2016-05-01

    This paper presents an analysis of thin plates with holes within the context of XFEM. New integration techniques are developed for exact geometrical representation of the holes. Numerical and exact integration techniques are presented, with some limitations for the exact integration technique. Simulation results show that the proposed techniques help to reduce the solution error, due to the exact geometrical representation of the holes and the utilization of appropriate quadrature rules. A discussion of the minimum integration order needed to achieve good accuracy and convergence for the techniques presented in this work is also included.

  20. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at the symmetry, symplecticity and approximation of invariants of the proposed methods. This allows integration up to long times with reasonable accuracy. Computational efficiency is also our aim. Therefore, we perform numerical computations to compare the methods considered, and we conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
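
    As a rough illustration of the kind of scheme discussed above, the sketch below applies an explicit second-order Lawson (exponential Heun) step to the cubic Schroedinger equation i u_t + u_xx + |u|^2 u = 0 on a periodic grid, projecting the solution back onto its initial L2 norm after every step. The grid, initial data and step size are illustrative choices and are not taken from the paper.

      # Hedged sketch of an explicit second-order Lawson (exponential Heun) step for the
      # cubic Schroedinger equation i u_t + u_xx + |u|^2 u = 0 on a periodic grid, with the
      # solution projected back onto its initial L2 norm after every step.
      import numpy as np

      N, L = 256, 40.0
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
      h = 1e-3
      E = np.exp(-1j * k ** 2 * h)          # exact propagator of the linear part over one step

      def expL(u):                          # apply exp(h*L) via FFT
          return np.fft.ifft(E * np.fft.fft(u))

      def nonlin(u):                        # N(u) = i |u|^2 u
          return 1j * np.abs(u) ** 2 * u

      u = 1.0 / np.cosh(x)                  # illustrative soliton-like initial condition
      norm0 = np.linalg.norm(u)
      for _ in range(5000):
          k1 = nonlin(u)
          u_pred = expL(u + h * k1)         # Lawson-Euler predictor
          u = expL(u + 0.5 * h * k1) + 0.5 * h * nonlin(u_pred)
          u *= norm0 / np.linalg.norm(u)    # projection onto the conserved norm
      print("norm drift after projection:", abs(np.linalg.norm(u) - norm0))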

  1. Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect

    NASA Astrophysics Data System (ADS)

    Chao, Chia-Chun George

    2009-03-01

    The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy can be as good as, in maximum error, 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track up to 30 days. Similar accuracies can be expected when the object is tumbling as long as the rate of attitude change is different from the orbit rate. Results of this study reveal an important phenomenon that the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.

  2. Multilevel ensemble Kalman filtering

    DOE PAGES

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-06-14

    This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
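
    The multilevel idea at the core of this construction can be sketched without the filtering step: the expectation of an SDE functional is written as a telescoping sum over time-grid levels, with coarse and fine Euler-Maruyama paths on each level driven by the same Brownian increments. The example below is a hedged toy version for a scalar geometric Brownian motion; the EnKF analysis step itself is omitted.

      # Hedged sketch of the multilevel Monte Carlo idea underlying the multilevel EnKF:
      # the mean of an SDE endpoint is written as a telescoping sum over time-grid levels,
      # with coupled coarse/fine Euler-Maruyama paths driven by the same Brownian increments.
      import numpy as np

      rng = np.random.default_rng(0)
      a, b, x0, T = 0.05, 0.2, 1.0, 1.0     # illustrative geometric Brownian motion

      def coupled_paths(level, n_samples, m=2):
          """Return fine- and coarse-level endpoint samples driven by shared noise."""
          nf = m ** level
          dtf = T / nf
          xf = np.full(n_samples, x0)
          xc = np.full(n_samples, x0)
          dw_coarse = np.zeros(n_samples)
          for step in range(nf):
              dw = rng.normal(0.0, np.sqrt(dtf), n_samples)
              xf += a * xf * dtf + b * xf * dw
              dw_coarse += dw
              if (step + 1) % m == 0 and level > 0:      # one coarse step per m fine steps
                  xc += a * xc * (m * dtf) + b * xc * dw_coarse
                  dw_coarse[:] = 0.0
          return xf, xc

      levels, samples = 5, [4000, 2000, 1000, 500, 250, 125]
      estimate = 0.0
      for l in range(levels + 1):
          xf, xc = coupled_paths(l, samples[l])
          estimate += np.mean(xf) if l == 0 else np.mean(xf - xc)
      print("MLMC estimate of E[X_T]:", estimate, " exact:", x0 * np.exp(a * T))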

  3. Density-matrix-based algorithm for solving eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Polizzi, Eric

    2009-03-01

    A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.

  4. On the limits of numerical astronomical solutions used in paleoclimate studies

    NASA Astrophysics Data System (ADS)

    Zeebe, Richard E.

    2017-04-01

    Numerical solutions of the equations of the Solar System estimate Earth's orbital parameters in the past and represent the backbone of cyclostratigraphy and astrochronology, now widely applied in geology and paleoclimatology. Given one numerical realization of a Solar System model (i.e., obtained using one code or integrator package), various parameters determine the properties of the solution and usually limit its validity to a certain time period. Such limitations are denoted here as "internal" and include limitations due to (i) the underlying physics/physical model and (ii) numerics. The physics include initial coordinates and velocities of Solar System bodies, treatment of the Moon and asteroids, the Sun's quadrupole moment, and the intrinsic dynamics of the Solar System itself, i.e., its chaotic nature. Numerical issues include solver algorithm, numerical accuracy (e.g., time step), and round-off errors. At present, internal limitations seem to restrict the validity of astronomical solutions to perhaps the past 50 or 60 myr. However, little is currently known about "external" limitations, that is, how do different numerical realizations compare, say, between different investigators using different codes and integrators? Hitherto only two solutions for Earth's eccentricity appear to be used in paleoclimate studies, provided by two different groups that integrated the full Solar System equations over the past >100 myr (Laskar and coworkers and Varadi et al. 2003). In this contribution, I will present results from new Solar System integrations for Earth's eccentricity obtained using the integrator package HNBody (Rauch and Hamilton 2002). I will discuss the various internal limitations listed above within the framework of the present simulations. I will also compare the results to the existing solutions, the details of which are still being sorted out as several simulations are still running at the time of writing.

  5. Efficient grid-based techniques for density functional theory

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hernandez, Juan Ignacio

    Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry---and the topic of this dissertation---is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham density functional theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1,2,3,6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.

  6. A mesh-free approach to acoustic scattering from multiple spheres nested inside a large sphere by using diagonal translation operators.

    PubMed

    Hesford, Andrew J; Astheimer, Jeffrey P; Greengard, Leslie F; Waag, Robert C

    2010-02-01

    A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method.

  7. A mesh-free approach to acoustic scattering from multiple spheres nested inside a large sphere by using diagonal translation operators

    PubMed Central

    Hesford, Andrew J.; Astheimer, Jeffrey P.; Greengard, Leslie F.; Waag, Robert C.

    2010-01-01

    A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method. PMID:20136208

  8. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.

    2014-02-01

    This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, by selecting relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.

  9. A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.

    PubMed

    Yang, Wenlun; Fu, Minyue

    2017-11-01

    Clock synchronization is an issue of vital importance in applications of WSNs. This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy will decline over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a fully synchronous protocol is unrealistic in practice. Numerical simulations are shown to illustrate the performance of the proposed protocol.
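
    A hedged two-clock toy model of the proportional-integral idea is sketched below: a slave clock with an unknown constant skew applies a PI correction to its rate estimate at every synchronization period, so that its offset from a reference clock shrinks despite the skew. The gains and the skew value are illustrative, and the consensus/network aspects of the protocol are not modeled.

      # Toy PI rate correction for a single slave clock with unknown constant skew.
      skew = 1.0002            # slave hardware advances 1.0002 s per true second (unknown to slave)
      rate_corr = 1.0          # multiplicative rate correction maintained by the slave
      Kp, Ki, T = 0.3, 0.05, 1.0
      ref_time = slave_time = 0.0
      integral = 0.0

      for sync_round in range(60):
          ref_time += T
          slave_time += skew * rate_corr * T          # slave advances with its correction applied
          offset = slave_time - ref_time              # measured at each synchronization instant
          integral += offset
          rate_corr -= Kp * offset + Ki * integral    # PI update of the rate correction
          if sync_round % 15 == 14:
              print(f"round {sync_round + 1:2d}: offset = {offset:+.6f} s, rate_corr = {rate_corr:.6f}")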

  10. Application of the boundary integral method to immiscible displacement problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masukawa, J.; Horne, R.N.

    1988-08-01

    This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.

  11. A Galerkin Boundary Element Method for two-dimensional nonlinear magnetostatics

    NASA Astrophysics Data System (ADS)

    Brovont, Aaron D.

    The Boundary Element Method (BEM) is a numerical technique for solving partial differential equations that is used broadly among the engineering disciplines. The main advantage of this method is that one needs only to mesh the boundary of a solution domain. A key drawback is the myriad of integrals that must be evaluated to populate the full system matrix. To this day these integrals have been evaluated using numerical quadrature. In this research, a Galerkin formulation of the BEM is derived and implemented to solve two-dimensional magnetostatic problems with a focus on accurate, rapid computation. To this end, exact, closed-form solutions have been derived for all the integrals comprising the system matrix as well as those required to compute fields in post-processing; the need for numerical integration has been eliminated. It is shown that calculation of the system matrix elements using analytical solutions is 15-20 times faster than with numerical integration of similar accuracy. Furthermore, through the example analysis of a c-core inductor, it is demonstrated that the present BEM formulation is a competitive alternative to the Finite Element Method (FEM) for linear magnetostatic analysis. Finally, the BEM formulation is extended to analyze nonlinear magnetostatic problems via the Dual Reciprocity Method (DRBEM). It is shown that a coarse, meshless analysis using the DRBEM is able to achieve RMS error of 3-6% compared to a commercial FEM package in lightly saturated conditions.

  12. An effective algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2012-08-01

    Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible while also being accurate; an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are promising for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from one of two algorithms, depending on the value of one of the arguments.
    Catalogue identifier: AEMC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 567
    No. of bytes in distributed program, including test data, etc.: 4444
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Any computer with a FORTRAN 90 compiler
    Operating system: Linux, Windows 7, Windows XP
    RAM: 0.6 Mb
    Classification: 2.4, 7.2
    Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter.
    Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is obtained by mixing these two algorithms and by selecting ranges of the argument ω in which performance is the fastest.
    Restrictions: Two input parameters for the Chandrasekhar function, x and ω (notation used in the code), are restricted to the range 0⩽x⩽1 and 0⩽ω⩽1, which is sufficient in numerous applications.
    Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at 10^-10, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places.
    Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
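
    For illustration only (this is not the mixed algorithm of the paper), the Chandrasekhar function for isotropic scattering can also be obtained by fixed-point iteration of the standard nonlinear integral equation 1/H(mu) = sqrt(1 - omega) + (omega/2) * integral_0^1 mu' H(mu') / (mu + mu') dmu', with the integral evaluated by Gauss-Legendre quadrature, as in the sketch below.

      # Hedged illustration: Chandrasekhar H(x, omega) for isotropic scattering from the
      # standard nonlinear integral equation, solved by fixed-point iteration on a
      # Gauss-Legendre grid (not the mixed Romberg-based algorithm of the paper).
      import numpy as np

      def chandrasekhar_H(x, omega, n_nodes=64, tol=1e-13, max_iter=500):
          # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
          t, w = np.polynomial.legendre.leggauss(n_nodes)
          mu = 0.5 * (t + 1.0)
          w = 0.5 * w
          H = np.ones(n_nodes)                       # H on the quadrature grid
          c = np.sqrt(1.0 - omega)
          for _ in range(max_iter):
              integral = np.array([np.sum(w * mu * H / (m + mu)) for m in mu])
              H_new = 1.0 / (c + 0.5 * omega * integral)
              if np.max(np.abs(H_new - H)) < tol:
                  H = H_new
                  break
              H = H_new
          # evaluate at the requested argument x using the converged grid values
          return 1.0 / (c + 0.5 * omega * np.sum(w * mu * H / (x + mu)))

      print(chandrasekhar_H(0.5, 0.9))   # H(x=0.5, omega=0.9)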

  13. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function in truly representing the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while its accuracy decreases, with an error of 24%.

  14. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
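
    The integrating-factor ingredient mentioned in point (i) can be sketched for a single conductance-based leaky integrate-and-fire neuron: writing dV/dt = -G(t)V + I(t) with G = g_L + g_syn(t) and I = g_L*E_L + g_syn(t)*E_syn, and holding G and I constant over a step, the integrating factor exp(G t) yields an exact exponential update that remains stable even in stiff, high-conductance states. The parameter values below are illustrative, and the network-level spike-spike corrections of the paper are not shown.

      # Hedged single-neuron sketch of the integrating-factor (exponential) update for a
      # conductance-based leaky integrate-and-fire neuron with a decaying synaptic conductance.
      import numpy as np

      g_L, E_L, E_syn, V_th, V_reset = 0.05, -65.0, 0.0, -50.0, -65.0   # ms^-1, mV
      tau_syn, dt, t_end = 2.0, 0.5, 200.0                               # ms

      rng = np.random.default_rng(1)
      V, g_syn, spikes = E_L, 0.0, []
      for step in range(int(t_end / dt)):
          t = step * dt
          if rng.random() < 0.05:                     # hypothetical Poisson-like synaptic input
              g_syn += 0.2
          G = g_L + g_syn
          I = g_L * E_L + g_syn * E_syn
          V_inf = I / G
          V = V_inf + (V - V_inf) * np.exp(-G * dt)   # integrating-factor (exponential) update
          g_syn *= np.exp(-dt / tau_syn)              # exact decay of the synaptic conductance
          if V >= V_th:
              spikes.append(t)
              V = V_reset
      print("number of spikes:", len(spikes))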

  15. Numerical Study of the Generation of Linear Energy Transfer Spectra for Space Radiation Applications

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Wilson, John W.; Hunter, Abigail

    2005-01-01

    In analyzing charged particle spectra in space due to galactic cosmic rays (GCR) and solar particle events (SPE), the conversion of particle energy spectra into linear energy transfer (LET) distributions is a convenient guide in assessing biologically significant components of these spectra. The mapping of LET to energy is triple valued and can be defined only on open energy subintervals where the derivative of LET with respect to energy is not zero. Presented here is a well-defined numerical procedure which allows for the generation of LET spectra on the open energy subintervals that are integrable in spite of their singular nature. The efficiency and accuracy of the numerical procedures are demonstrated by providing examples of computed differential and integral LET spectra and their equilibrium components for historically large SPEs and 1977 solar minimum GCR environments. Due to the biological significance of tissue, all simulations are done with tissue as the target material.

  16. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.

  17. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of integral equation methods, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.

  18. An asteroids' motion simulation using smoothed ephemerides DE405, DE406, DE408, DE421, DE423 and DE722. (Russian Title: Прогнозирование движения астероидов с использованием сглаженных эфемерид DE405, DE406, DE408, DE421, DE423 и DE722)

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2011-07-01

    The results of smoothing the major planets' and the Moon's ephemerides by cubic polynomials are presented. The ephemerides considered are DE405, DE406, DE408, DE421, DE423 and DE722. The goal of the smoothing is to eliminate the discontinuous behavior of interpolated coordinates and their derivatives at the junctions of adjacent interpolation intervals when calculations are made with 34-digit decimal accuracy. The reason for this behavior is the limited 16-digit decimal accuracy of the coefficients of the interpolating Chebyshev polynomials in the ephemerides. Such discontinuities in the perturbing bodies' coordinates significantly reduce the advantages of 34-digit calculations, because the accuracy of numerical integration of the asteroids' equations of motion increases in this case by only 3 orders of magnitude compared with 16-digit calculations. It is demonstrated that cubic-polynomial smoothing of the ephemerides eliminates the jumps in the perturbing bodies' coordinates and their derivatives. This increases the numerical integration accuracy by 7-9 orders of magnitude. All calculations in this work were made with 34-digit decimal accuracy on the computer cluster "Skif Cyberia" of Tomsk State University.

  19. Closed-Form Evaluation of Mutual Coupling in a Planar Array of Circular Apertures

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1996-01-01

    The integral expression for the mutual admittance between circular apertures in a planar array is evaluated in closed form. Very good accuracy is realized when compared with values that were obtained by numerical integration. Utilization of this closed-form expression, for all element pairs that are separated by more than one element spacing, yields extremely accurate results and significantly reduces the computation time that is required to analyze the performance of a large electronically scanning antenna array.

  20. Smart Phase Tuning in Microwave Photonic Integrated Circuits Toward Automated Frequency Multiplication by Design

    NASA Astrophysics Data System (ADS)

    Nabavi, N.

    2018-07-01

    The author investigates monitoring methods for fine adjustment of the previously proposed on-chip architecture for frequency multiplication and translation of harmonics by design. Digital signal processing (DSP) algorithms are utilized to create an optimized microwave photonic integrated circuit functionality toward automated frequency multiplication. The implemented DSP algorithms are based on the discrete Fourier transform and on optimization algorithms (greedy and gradient-based), which are analytically derived and numerically compared in terms of accuracy and speed of convergence.

  1. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows [Formula: see text] versus [Formula: see text] cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from [Formula: see text] to [Formula: see text] with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^-4 to 10^-3 to give an acceptable compromise between efficiency and accuracy.
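
    A hedged toy version of the compound two-step compression is sketched below: a pivoted incomplete Cholesky decomposition of a symmetric positive semidefinite matrix (standing in for the unfolded two-electron integral tensor), followed by a truncated SVD of the resulting Cholesky vectors. The matrix, thresholds and sizes are illustrative and the code does not reflect the authors' implementation.

      # Hedged toy illustration of the CD + SVD compression strategy on a dense
      # symmetric positive semidefinite matrix.
      import numpy as np

      def pivoted_cholesky(A, tol=1e-6):
          d = np.diag(A).astype(float).copy()
          cols = []
          while d.max() > tol:
              p = int(np.argmax(d))
              col = A[:, p].astype(float).copy()
              for l in cols:                       # subtract previously removed rank-1 pieces
                  col -= l * l[p]
              col /= np.sqrt(d[p])
              cols.append(col)
              d -= col ** 2
          return np.column_stack(cols)             # A is approximately B @ B.T

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 30))
      A = X @ X.T                                   # low-rank PSD test matrix

      B = pivoted_cholesky(A, tol=1e-8)
      U, s, _ = np.linalg.svd(B, full_matrices=False)
      r = int(np.sum(s > 1e-4 * s[0]))              # truncation threshold (illustrative)
      B_compressed = U[:, :r] * s[:r]
      print("Cholesky rank:", B.shape[1], " truncated rank:", r,
            " reconstruction error:", np.linalg.norm(A - B_compressed @ B_compressed.T))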

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the authors take the reader on a wide-ranging tour of modern numerical mathematics, with enough background material so that even readers with little or no training in numerical analysis can follow. Here is a list of just a few of the topics visited: numerical quadrature (i.e., numerical integration), series summation, sequence extrapolation, contour integration, Fourier integrals, high-precision arithmetic, interval arithmetic, symbolic computing, numerical linear algebra, perturbation theory, Euler-Maclaurin summation, global minimization, eigenvalue methods, evolutionary algorithms, matrix preconditioning, random walks, special functions, elliptic functions, Monte-Carlo methods, and numerical differentiation.

  3. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    NASA Astrophysics Data System (ADS)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far field transformation (NTFFT) that is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method remains accurate even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  4. Multirate Particle-in-Cell Time Integration Techniques of Vlasov-Maxwell Equations for Collisionless Kinetic Plasma Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Knoll, Dana Alan

    2015-07-31

    A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt ≫ 1 and Δx ≫ λ_D), and requires many fewer dofs (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large scale simulations. The paper is organized as follows: Vlasov-Maxwell Particle-in-cell (PIC) methods for plasmas; Explicit, semi-implicit, and implicit time integrations; Implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, discrete conservation properties (energy, charge, canonical momentum, etc.)); Some numerical examples; and Summary.

  5. A modified moment-fitted integration scheme for X-FEM applications with history-dependent material data

    NASA Astrophysics Data System (ADS)

    Zhang, Ziyu; Jiang, Wen; Dolbow, John E.; Spencer, Benjamin W.

    2018-01-01

    We present a strategy for the numerical integration of partial elements with the eXtended finite element method (X-FEM). The new strategy is specifically designed for problems with propagating cracks through a bulk material that exhibits inelasticity. Following a standard approach with the X-FEM, as the crack propagates new partial elements are created. We examine quadrature rules that have sufficient accuracy to calculate stiffness matrices regardless of the orientation of the crack with respect to the element. This permits the number of integration points within elements to remain constant as a crack propagates, and for state data to be easily transferred between successive discretizations. In order to maintain weights that are strictly positive, we propose an approach that blends moment-fitted weights with volume-fraction based weights. To demonstrate the efficacy of this simple approach, we present results from numerical tests and examples with both elastic and plastic material response.
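
    A minimal one-dimensional sketch of the blending idea, with illustrative points, polynomial degree and blending parameter (the paper's actual construction is not reproduced here), is:

        import numpy as np

        # Hedged one-dimensional sketch of the blending idea: moment-fitted weights
        # w_mf reproduce polynomial moments over the physical part [a, c] of a cut
        # element [a, b], and are blended with simple volume-fraction weights w_vf so
        # that the result stays positive.  The points, degree and blending parameter
        # beta are illustrative; the paper's actual construction is not reproduced.
        def blended_weights(points, a, b, c, degree, beta=0.5):
            # moment fitting: solve  sum_i w_i * x_i**k = int_a^c x**k dx,  k = 0..degree
            P = np.vander(points, degree + 1, increasing=True).T
            moments = np.array([(c**(k + 1) - a**(k + 1)) / (k + 1)
                                for k in range(degree + 1)])
            w_mf = np.linalg.lstsq(P, moments, rcond=None)[0]
            # volume-fraction weights: equal weights on the fixed points, scaled so
            # that they integrate constants exactly over the physical part [a, c]
            w_vf = np.full(points.size, (c - a) / points.size)
            return (1.0 - beta) * w_mf + beta * w_vf

        pts = np.linspace(0.0, 1.0, 5)      # integration points fixed on the element
        print(blended_weights(pts, 0.0, 1.0, 0.6, degree=2))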

  6. Determining anisotropic conductivity using diffusion tensor imaging data in magneto-acoustic tomography with magnetic induction

    NASA Astrophysics Data System (ADS)

    Ammari, Habib; Qiu, Lingyun; Santosa, Fadil; Zhang, Wenlong

    2017-12-01

    In this paper we present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor by integrating magneto-acoustic tomography with data acquired from diffusion tensor imaging. Magneto-acoustic tomography with magnetic induction (MAT-MI) is a hybrid, non-invasive medical imaging technique to produce conductivity images with improved spatial resolution and accuracy. Diffusion tensor imaging (DTI) is also a non-invasive technique for characterizing the diffusion properties of water molecules in tissues. We propose a model for anisotropic conductivity in which the conductivity is proportional to the diffusion tensor. Under this assumption, we propose an optimal control approach for reconstructing the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz-type stability of the algorithm and present numerical examples to illustrate its accuracy and feasibility.

  7. Computer simulation of solutions of polyharmonic equations in plane domain

    NASA Astrophysics Data System (ADS)

    Kazakova, A. O.

    2018-05-01

    A systematic study of plane problems of the theory of polyharmonic functions is presented. A method for reducing boundary-value problems for polyharmonic functions to a system of integral equations on the boundary of the domain is given, and a numerical algorithm for simulating solutions of this system is suggested. Particular attention is paid to the numerical solution of the main problems, in which the values of the function and its derivatives are given. Test examples are considered that confirm the effectiveness and accuracy of the suggested algorithm.

  8. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboud, C.; Premel, D.; Lesselier, D.

    2007-03-21

    Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  9. Recent Advances in Simulation of Eddy Current Testing of Tubes and Experimental Validations

    NASA Astrophysics Data System (ADS)

    Reboud, C.; Prémel, D.; Lesselier, D.; Bisiaux, B.

    2007-03-01

    Eddy current testing (ECT) is widely used in iron and steel industry for the inspection of tubes during manufacturing. A collaboration between CEA and the Vallourec Research Center led to the development of new numerical functionalities dedicated to the simulation of ECT of non-magnetic tubes by external probes. The achievement of experimental validations led us to the integration of these models into the CIVA platform. Modeling approach and validation results are discussed here. A new numerical scheme is also proposed in order to improve the accuracy of the model.

  10. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
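
    The following toy sketch shows a quasi-Monte Carlo rule of the general kind mentioned above applied to a simple two-dimensional integrand; the generator choice and the integrand are illustrative assumptions, not the paper's CUDA/GPU implementation:

        import numpy as np

        # Hedged sketch of a rank-1 quasi-Monte Carlo rule (a Weyl/Richtmyer sequence
        # with square-root-of-prime generators) applied to a toy two-dimensional
        # Feynman-parameter-like integrand; it is not the CUDA/GPU implementation of
        # the paper.
        def qmc_integrate(f, dim, n):
            primes = np.array([2, 3, 5, 7, 11, 13, 17, 19])[:dim]
            alpha = np.sqrt(primes)                    # irrational generators
            i = np.arange(1, n + 1)[:, None]
            x = np.mod(i * alpha, 1.0)                 # low-discrepancy points in [0,1)^dim
            return np.mean(f(x))

        f = lambda x: 1.0 / (1.0 + x[:, 0] + x[:, 1]) ** 2   # exact value 2*ln 2 - ln 3
        print(qmc_integrate(f, dim=2, n=100_000))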

  11. A modified precise integration method for transient dynamic analysis in structural systems with multiple damping models

    NASA Astrophysics Data System (ADS)

    Ding, Zhe; Li, Li; Hu, Yujin

    2018-01-01

    Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Therefore, these damped systems often contain multiple damping models, which leads to great difficulties in analysis. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency, and that the stability condition is easily satisfied in practice.
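
    A minimal sketch of the core of a precise integration step (the 2^N scaling-and-squaring recursion on the transition matrix; the paper's extended state-space construction and Gauss-Legendre treatment of the load term are not reproduced) is:

        import numpy as np

        # Hedged sketch of the core of a precise integration step: the transition
        # matrix exp(H*dt) is built by the 2^N scaling-and-squaring recursion on the
        # small increment Ta = exp(H*tau) - I, which limits round-off for tiny tau.
        # H below is only an illustrative damped oscillator.
        def precise_transition(H, dt, N=20):
            n = H.shape[0]
            I = np.eye(n)
            Htau = H * (dt / 2 ** N)
            # 4th-order Taylor expansion of exp(H*tau) - I
            Ta = Htau @ (I + Htau @ (I / 2 + Htau @ (I / 6 + Htau / 24)))
            for _ in range(N):
                Ta = 2.0 * Ta + Ta @ Ta        # (I + Ta)^2 = I + (2*Ta + Ta@Ta)
            return I + Ta

        H = np.array([[0.0, 1.0],
                      [-4.0, -0.2]])            # x' = v, v' = -4x - 0.2v
        print(precise_transition(H, 0.1))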

  12. Mitigation of Atmospheric Delay in SAR Absolute Ranging Using Global Numerical Weather Prediction Data: Corner Reflector Experiments at 3 Different Test Sites

    NASA Astrophysics Data System (ADS)

    Cong, Xiaoying; Balss, Ulrich; Eineder, Michael

    2015-04-01

    The atmospheric delay due to vertical stratification, the so-called stratified atmospheric delay, has a great impact on both interferometric and absolute range measurements. In our recent work [1][2][3], centimeter-level range accuracy has been demonstrated for Corner Reflector (CR) based measurements by applying an atmospheric delay correction using Zenith Path Delay (ZPD) corrections derived from nearby Global Positioning System (GPS) stations. For global usage, an effective method has been introduced to estimate the stratified delay based on global 4-dimensional Numerical Weather Prediction (NWP) products: the direct integration method [4][5]. Two products, ERA-Interim and operational data, provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), are used to integrate the stratified delay. In order to assess the integration accuracy, a validation approach is investigated based on ZPD derived from six permanent GPS stations located in different meteorological conditions. Range accuracy at the centimeter level is demonstrated using both ECMWF products. Further experiments have been carried out in order to determine the best interpolation method by analyzing the temporal and spatial correlation of the atmospheric delay using both ECMWF and GPS ZPD. Finally, the integrated atmospheric delays in the slant direction (Slant Path Delay, SPD) have been applied instead of the GPS ZPD for CR experiments at three different test sites with more than 200 TerraSAR-X High Resolution SpotLight (HRSL) images. The delay accuracy is around 1-3 cm depending on the location of the test site, due to the local water vapor variation and the acquisition time/date. [1] Eineder M., Minet C., Steigenberger P., et al. Imaging geodesy - Toward centimeter-level ranging accuracy with TerraSAR-X. Geoscience and Remote Sensing, IEEE Transactions on, 2011, 49(2): 661-671. [2] Balss U., Gisinger C., Cong X. Y., et al. Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the Base of Far-Distributed Test Sites; EUSAR 2014; 10th European Conference on Synthetic Aperture Radar; Proceedings of. VDE, 2014: 1-4. [3] Eineder M., Balss U., Gisinger C., et al. TerraSAR-X pixel localization accuracy: Approaching the centimeter level, Geoscience and Remote Sensing Symposium (IGARSS), 2014 IEEE International. IEEE, 2014: 2669-2670. [4] Cong X., Balss U., Eineder M., et al. Imaging Geodesy -- Centimeter-Level Ranging Accuracy With TerraSAR-X: An Update. Geoscience and Remote Sensing Letters, IEEE, 2012, 9(5): 948-952. [5] Cong X. SAR Interferometry for Volcano Monitoring: 3D-PSI Analysis and Mitigation of Atmospheric Refractivity. München, Technische Universität München, Dissertation, 2014.

  13. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
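
    A minimal sketch of the general idea, assuming a leaky integrate-and-fire model with illustrative parameters (this is not the authors' code), is:

        import numpy as np

        # Hedged sketch of the general idea (not the authors' scheme verbatim):
        # second-order Runge-Kutta stepping of a leaky integrate-and-fire neuron,
        # with a linear interpolant for the spike time and re-integration of the
        # remainder of the step from the reset potential.  Parameters are illustrative.
        tau, v_rest, v_th, v_reset = 20e-3, -65e-3, -50e-3, -65e-3

        def rhs(v, drive):
            return -(v - v_rest) / tau + drive

        def rk2_step(v, drive, dt):
            k1 = rhs(v, drive)
            k2 = rhs(v + dt * k1, drive)
            return v + 0.5 * dt * (k1 + k2)

        def step_with_reset(v, drive, dt):
            v_new = rk2_step(v, drive, dt)
            if v_new < v_th:
                return v_new, None              # no spike in this step
            # linear interpolation of the threshold-crossing time inside the step
            t_spike = dt * (v_th - v) / (v_new - v)
            # recalibration: integrate the remainder of the step from the reset value
            return rk2_step(v_reset, drive, dt - t_spike), t_spike

        v, dt, drive = v_rest, 1e-4, 1.0        # drive in volts per second, illustrative
        for n in range(2000):
            v, spike = step_with_reset(v, drive, dt)
            if spike is not None:
                print("spike at t =", n * dt + spike)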

  14. Frequency-independent approach to calculate physical optics radiations with the quadratic concave phase variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yu Mao, E-mail: yumaowu@fudan.edu.cn; Teng, Si Jia, E-mail: sjteng12@fudan.edu.cn

    In this work, we develop the numerical steepest descent path (NSDP) method to calculate the physical optics (PO) radiations with the quadratic concave phase variations. With the surface integral equation method, the physical optics (PO) scattered fields are formulated and further reduced to the surface integrals. The high frequency physical critical points contributions, including the stationary phase points, the boundary resonance points and the vertex points are comprehensively studied via the proposed NSDP method. The key contributions of this work are twofold. One is that together with the PO integrals taking the quadratic parabolic and hyperbolic phase terms, this work makes the NSDP theory be complete for treating the PO integrals with quadratic phase variations. Another is that, in order to illustrate the transition effect of the high frequency physical critical points, in this work, we consider and further extend the NSDP method to calculate the PO integrals with the coalescence of the high frequency critical points. Numerical results for the highly oscillatory PO integral with the coalescence of the critical points are given to verify the efficiency of the proposed NSDP method. The NSDP method could achieve the frequency independent computational workload and error controllable accuracy in all the numerical experiments, especially for the case of the coalescence of the high frequency critical points.

  15. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus O(N_b^{3-4}) of single CD in most of other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^-4 to 10^-3 to give acceptable compromise between efficiency and accuracy.

  16. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount ΔT that is problem dependent. An approximate expression for the automatic evaluation of ΔT is derived and is shown to result in increased efficiency.
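
    The rate-constant bookkeeping described above can be sketched as follows (the Arrhenius parameters and the threshold ΔT are illustrative, not values from the report):

        import numpy as np

        # Hedged sketch of the bookkeeping described above: Arrhenius-type rate
        # constants k = A * T**N * exp(-E/(R*T)) are re-evaluated only when the
        # temperature has drifted by more than a problem-dependent delta_T since the
        # last evaluation.  All numbers here are illustrative.
        R = 8.314

        def rates(T, A, N, E):
            return A * T**N * np.exp(-E / (R * T))

        class LazyRates:
            def __init__(self, A, N, E, delta_T):
                self.A, self.N, self.E, self.delta_T = A, N, E, delta_T
                self.T_last = None
                self.k = None

            def __call__(self, T):
                if self.T_last is None or abs(T - self.T_last) > self.delta_T:
                    self.k = rates(T, self.A, self.N, self.E)
                    self.T_last = T
                return self.k

        lazy = LazyRates(A=np.array([1e10, 5e8]), N=np.array([0.0, 0.5]),
                         E=np.array([1.2e5, 8.0e4]), delta_T=5.0)
        for T in (1000.0, 1002.0, 1007.0):
            print(T, lazy(T))       # the middle call reuses the cached constants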

  17. A boundary integral equation method using auxiliary interior surface approach for acoustic radiation and scattering in two dimensions.

    PubMed

    Yang, S A

    2002-10-01

    This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.

  18. Numerical simulation of the transition to chaos in a dissipative Duffing oscillator with two-frequency excitation

    NASA Astrophysics Data System (ADS)

    Zavrazhina, T. V.

    2007-10-01

    A mathematical modeling technique is proposed for oscillation chaotization in an essentially nonlinear dissipative Duffing oscillator with two-frequency excitation on an invariant torus in ℝ2. The technique is based on the joint application of the parameter continuation method, Floquet stability criteria, bifurcation theory, and the Everhart high-accuracy numerical integration method. This approach is used for the numerical construction of subharmonic solutions in the case when the oscillator passes to chaos through a sequence of period-multiplying bifurcations. The value of a universal constant obtained earlier by the author while investigating oscillation chaotization in dissipative oscillators with single-frequency periodic excitation is confirmed.

  19. An efficient implementation of semi-numerical computation of the Hartree-Fock exchange on the Intel Phi processor

    NASA Astrophysics Data System (ADS)

    Liu, Fenglai; Kong, Jing

    2018-07-01

    Unique technical challenges and their solutions for implementing semi-numerical Hartree-Fock exchange on the Phi processor are discussed, especially concerning the single-instruction-multiple-data type of processing and the small cache size. Benchmark calculations on a series of buckyball molecules with various Gaussian basis sets on a Phi processor and a six-core CPU show that the Phi processor provides as much as a 12-fold speedup with large basis sets compared with the conventional four-center electron repulsion integration approach performed on the CPU. The accuracy of the semi-numerical scheme is also evaluated and found to be comparable to that of the resolution-of-identity approach.

  20. Research of small bodies motion prognosis effectivity on cluster "Skif Cyberia". (Russian Title: Исследование эффективности прогнозирования движения малых тел Солнечной системы на кластере "Skif Cyberia")

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2010-12-01

    The results of experimental estimates, obtained on the cluster "Skif Cyberia", of the accuracy and speed of Everhart's numerical integration method are presented. The integration has been carried out for celestial bodies' equations of motion, namely the N-body problem equations and the perturbed two-body problem equations. In the latter case the perturbing bodies' coordinates are taken during the calculations from the ephemeris DE406. The accuracy and speed estimates have been made by means of forward and backward integration of the equations of motion of the short-period comet Herschel-Rigollet with various values of the Everhart method parameters. The optimal combinations of these parameters have been obtained. The research has been made both for 16-digit and for 34-digit decimal accuracy.
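
    The forward-backward test mentioned above can be sketched generically as follows, using a general-purpose high-order integrator and the unperturbed two-body problem in place of the Everhart integrator and the DE406-based force model:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hedged sketch of the forward/backward accuracy test described above, using
        # a general-purpose high-order integrator on the unperturbed two-body problem
        # (mu = 1) instead of the Everhart integrator and the DE406 ephemeris.
        def two_body(t, y):
            r, v = y[:3], y[3:]
            return np.concatenate([v, -r / np.linalg.norm(r) ** 3])

        y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.1, 0.0])   # a mildly eccentric orbit
        T = 50.0
        fwd = solve_ivp(two_body, (0.0, T), y0, method="DOP853",
                        rtol=1e-12, atol=1e-12)
        bwd = solve_ivp(two_body, (T, 0.0), fwd.y[:, -1], method="DOP853",
                        rtol=1e-12, atol=1e-12)
        print("return error after forward-backward integration:",
              np.linalg.norm(bwd.y[:, -1] - y0))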

  1. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  2. Comparison of mixed-mode stress-intensity factors obtained through displacement correlation, J-integral formulation, and modified crack-closure integral

    NASA Astrophysics Data System (ADS)

    Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.

    This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, J-integral, and modified crack-closure integral. All of the mentioned procedures involve only one analysis step and are incorporated in the post-processing phase of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a closed-form solution problem under mixed-mode conditions. The accuracy of the described methods is then discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.

  3. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR (s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  4. Direct numerical solution of the Ornstein-Zernike integral equation and spatial distribution of water around hydrophobic molecules

    NASA Astrophysics Data System (ADS)

    Ikeguchi, Mitsunori; Doi, Junta

    1995-09-01

    The Ornstein-Zernike integral equation (OZ equation) has been used to evaluate the distribution function of solvents around solutes, but its numerical solution is difficult for molecules with a complicated shape. This paper proposes a numerical method to directly solve the OZ equation by introducing a 3D lattice. The method employs none of the approximations that the reference interaction site model (RISM) equation employs, and it enables one to obtain the spatial distribution of spherical solvents around solutes with an arbitrary shape. Numerical accuracy is sufficient when the grid spacing is less than 0.5 Å for solvent water. The spatial water distribution around a propane molecule is demonstrated as an example of a nonspherical hydrophobic molecule using iso-value surfaces. The water model proposed by Pratt and Chandler is used. The distribution agrees with the molecular dynamics simulation. The distribution increases offshore molecular concavities. The spatial distribution of water around 5α-cholest-2-ene (C27H46) is visualized using computer graphics techniques and a similar trend is observed.

  5. An ultra-accurate numerical method in the design of liquid phononic crystals with hard inclusion

    NASA Astrophysics Data System (ADS)

    Li, Eric; He, Z. C.; Wang, G.; Liu, G. R.

    2017-12-01

    Phononic crystals (PCs) are periodic man-made composite materials. In this paper, a mass-redistributed finite element method (MR-FEM) is formulated to study wave propagation within liquid PCs with hard inclusions. With a perfect balance between stiffness and mass in the MR-FEM model, the dispersion error of the longitudinal wave is minimized by redistribution of mass. Such tuning can be easily achieved by adjusting the parameter r that controls the location of the integration points of the mass matrix. More importantly, the property of mass conservation in the MR-FEM model indicates that the locations of integration points inside or outside the element are immaterial. Four numerical examples are studied in this work, including liquid PCs with cross- and circle-shaped hard inclusions, different sizes of inclusion, and a defect. Compared with the standard finite element method, the numerical results verify the accuracy and effectiveness of MR-FEM. The proposed MR-FEM is a unique and innovative numerical approach with outstanding features, which has strong potential for studying stress waves within multi-physics PCs.

  6. A numerical method for simulations of rigid fiber suspensions

    NASA Astrophysics Data System (ADS)

    Tornberg, Anna-Karin; Gustavsson, Katarina

    2006-06-01

    In this paper, we present a numerical method designed to simulate the challenging problem of the dynamics of slender fibers immersed in an incompressible fluid. Specifically, we consider microscopic, rigid fibers that sediment due to gravity. Such fibers make up the micro-structure of many suspensions for which the macroscopic dynamics are not well understood. Our numerical algorithm is based on a non-local slender body approximation that yields a system of coupled integral equations, relating the forces exerted on the fibers to their velocities, which takes into account the hydrodynamic interactions of the fluid and the fibers. The system is closed by imposing the constraints of rigid body motions. The fact that the fibers are straight has been further exploited in the design of the numerical method, expanding the force on Legendre polynomials to take advantage of the specific mathematical structure of a finite-part integral operator, as well as introducing analytical quadrature in a manner possible only for straight fibers. We have carefully treated issues of accuracy, and present convergence results for all numerical parameters before we finally discuss the results from simulations including a larger number of fibers.

  7. Integral Method of Boundary Characteristics: Neumann Condition

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2018-05-01

    A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions with the use of this algorithm are presented. The solutions obtained with it can be considered as exact, because their errors comprise hundredths and ten-thousandths of a percent for a wide range of change in the parameters of a problem.

  8. Boundary element modelling of dynamic behavior of piecewise homogeneous anisotropic elastic solids

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Litvinchuk, S. Yu

    2018-04-01

    A traditional direct boundary integral equations method is applied to solve three-dimensional dynamic problems of piecewise homogeneous linear elastic solids. The materials of the homogeneous parts are considered to be generally anisotropic. The technique used to solve the boundary integral equations is based on the boundary element method applied together with the Radau IIA convolution quadrature method. A numerical example of a suddenly loaded 3D prismatic rod consisting of two subdomains with different anisotropic elastic properties is presented to verify the accuracy of the proposed formulation.

  9. A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation

    NASA Technical Reports Server (NTRS)

    Jones, Brandon A.; Anderson, Rodney L.

    2012-01-01

    Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
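
    As a generic illustration of why symplectic integrators are of interest for long propagations (a textbook demonstration, not an example taken from the survey), compare explicit and symplectic Euler on a harmonic oscillator:

        # Generic textbook illustration (not from the survey itself): on a harmonic
        # oscillator, explicit Euler steadily gains energy, while the symplectic
        # Euler update keeps the energy error bounded over long integrations.
        def explicit_euler(q, p, dt):
            return q + dt * p, p - dt * q

        def symplectic_euler(q, p, dt):
            p = p - dt * q              # kick using the old position
            return q + dt * p, p        # drift using the updated momentum

        energy = lambda q, p: 0.5 * (q * q + p * p)
        dt, n = 0.05, 5000
        qe, pe = 1.0, 0.0
        qs, ps = 1.0, 0.0
        for _ in range(n):
            qe, pe = explicit_euler(qe, pe, dt)
            qs, ps = symplectic_euler(qs, ps, dt)
        print("explicit Euler energy:  ", energy(qe, pe))   # grows without bound
        print("symplectic Euler energy:", energy(qs, ps))   # stays near the initial 0.5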

  10. A boundary element method for steady incompressible thermoviscous flow

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.

    1991-01-01

    A boundary element formulation is presented for moderate Reynolds number, steady, incompressible, thermoviscous flows. The governing integral equations are written exclusively in terms of velocities and temperatures, thus eliminating the need for the computation of any gradients. Furthermore, with the introduction of reference velocities and temperatures, volume modeling can often be confined to only a small portion of the problem domain, typically near obstacles or walls. The numerical implementation includes higher order elements, adaptive integration and multiregion capability. Both the integral formulation and implementation are discussed in detail. Several examples illustrate the high level of accuracy that is obtainable with the current method.

  11. High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium

    NASA Astrophysics Data System (ADS)

    Barnett, Alex H.; Nelson, Bradley J.; Mahoney, J. Matthew

    2015-09-01

    We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (Δ + E + x_2) u(x_1, x_2) = 0, where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10^2 nodes, with an effort that is independent of the frequency parameter E. By combining with a high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.

  12. An Efficient Numerical Method for Computing Synthetic Seismograms for a Layered Half-space with Sources and Receivers at Close or Same Depths

    NASA Astrophysics Data System (ADS)

    Zhang, H.-m.; Chen, X.-f.; Chang, S.

    It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
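
    A toy illustration of the averaging idea behind PTAM, applied to a scalar oscillatory integral instead of the wavenumber integrals of the paper (the reading of the method here, partial integrals at successive zeros followed by repeated averaging, is an assumption based on the abstract), is:

        import numpy as np
        from scipy.integrate import quad

        # Hedged toy illustration of the averaging idea: partial integrals of the
        # slowly convergent oscillatory integral of sin(x)/x on [0, inf) (value pi/2)
        # are accumulated between successive zeros of the integrand (the peaks and
        # troughs of the running integral) and then repeatedly averaged.
        f = lambda x: np.sinc(x / np.pi)                  # sin(x)/x, finite at x = 0

        def partial_integrals(f, zeros):
            s, out = 0.0, []
            for a, b in zip(zeros[:-1], zeros[1:]):
                s += quad(f, a, b)[0]
                out.append(s)
            return np.array(out)

        def repeated_average(s, levels=8):
            for _ in range(levels):
                s = 0.5 * (s[:-1] + s[1:])
            return s[-1]

        zeros = np.pi * np.arange(1, 31)                  # successive zeros of sin(x)
        tail = repeated_average(partial_integrals(f, zeros))
        head = quad(f, 0.0, np.pi)[0]                     # contribution before the first zero
        print(head + tail, np.pi / 2)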

  13. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each well. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.

  14. Numerical computation of gravitational field for general axisymmetric objects

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
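
    A minimal sketch of a double exponential (tanh-sinh) rule of the kind referred to in step (I), applied to a simple test function rather than the ring-potential integrand, is:

        import numpy as np

        # Hedged sketch of a double exponential (tanh-sinh) quadrature rule; the
        # ring-potential integrand of the paper is replaced here by a simple test
        # function on [-1, 1].
        def tanh_sinh(f, h=0.1, kmax=60):
            t = h * np.arange(-kmax, kmax + 1)
            u = 0.5 * np.pi * np.sinh(t)
            x = np.tanh(u)                            # abscissae clustered at the endpoints
            w = 0.5 * np.pi * np.cosh(t) / np.cosh(u) ** 2
            return h * np.sum(w * f(x))

        f = lambda x: np.sqrt(1.0 - x * x)            # exact integral over [-1, 1] is pi/2
        print(tanh_sinh(f), np.pi / 2)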

  15. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  16. Extension of CE/SE method to non-equilibrium dissociating flows

    NASA Astrophysics Data System (ADS)

    Wen, C. Y.; Saldivar Massimi, H.; Shen, H.

    2018-03-01

    In this study, the hypersonic non-equilibrium flows over rounded nose geometries are numerically investigated by a robust conservation element and solution element (CE/SE) code, which is based on hybrid meshes consisting of triangular and quadrilateral elements. The dissociation and recombination chemical reactions as well as the vibrational energy relaxation are taken into account. The stiff source terms are solved by an implicit trapezoidal method of integration. Comparisons with laboratory and numerical cases are provided to demonstrate the accuracy and reliability of the present CE/SE code in simulating hypersonic non-equilibrium flows.
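
    An implicit trapezoidal step for a stiff source term can be sketched on a scalar toy relaxation model as follows (the paper's chemical and vibrational source terms are not reproduced):

        # Hedged scalar sketch of an implicit trapezoidal step for a stiff relaxation
        # source term dy/dt = -(y - y_eq)/eps, solved by Newton iteration; the actual
        # chemical/vibrational source terms of the paper are replaced by this toy model.
        def trapezoidal_step(y, dt, eps, y_eq, tol=1e-12, max_iter=20):
            f = lambda u: -(u - y_eq) / eps
            dres = 1.0 + 0.5 * dt / eps              # derivative of the residual w.r.t. z
            z = y                                    # initial Newton guess
            for _ in range(max_iter):
                res = z - y - 0.5 * dt * (f(y) + f(z))
                dz = -res / dres
                z += dz
                if abs(dz) < tol:
                    break
            return z

        # dt is 10x the relaxation time eps; the A-stable trapezoidal rule stays
        # bounded and approaches y_eq = 0.2 through a damped oscillation.
        y, dt, eps, y_eq = 1.0, 1e-2, 1e-3, 0.2
        for _ in range(5):
            y = trapezoidal_step(y, dt, eps, y_eq)
            print(y)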

  17. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for large data sets, a pair of integral and differential equations is considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than in the direct numerical K-L approach. By solving this with a minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed by the functional values of the PSWF and the eigenvalues obtained in the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.

  18. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of first importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol'nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.

  19. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
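
    The two first-order updates can be sketched for a single gating variable as follows (the kinetics are illustrative placeholders, not a specific channel model):

        import numpy as np

        # Hedged sketch of the two first-order updates discussed above for a single
        # gating variable dx/dt = (x_inf(V) - x) / tau(V); x_inf and tau are frozen
        # illustrative constants standing in for voltage-dependent channel kinetics.
        def euler_step(x, x_inf, tau, dt):
            return x + dt * (x_inf - x) / tau

        def exp_euler_step(x, x_inf, tau, dt):
            # exact relaxation over one step when V (hence x_inf and tau) is held fixed
            return x_inf + (x - x_inf) * np.exp(-dt / tau)

        x_inf, tau, dt = 0.8, 5e-3, 2e-3
        exact = x_inf + (0.0 - x_inf) * np.exp(-dt / tau)
        print("Euler error:            ", euler_step(0.0, x_inf, tau, dt) - exact)
        print("exponential Euler error:", exp_euler_step(0.0, x_inf, tau, dt) - exact)

    For a frozen membrane potential the exponential Euler update is exact; the differences the abstract describes arise when V, and hence x_inf and tau, change within the step and when the measured voltage carries noise.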

  20. Elastic interactions of a fatigue crack with a micro-defect by the mixed boundary integral equation method

    NASA Technical Reports Server (NTRS)

    Lua, Yuan J.; Liu, Wing K.; Belytschko, Ted

    1993-01-01

    In this paper, the mixed boundary integral equation method is developed to study the elastic interactions of a fatigue crack and a micro-defect such as a void, a rigid inclusion or a transformation inclusion. The method of pseudo-tractions is employed to study the effect of a transformation inclusion. An enriched element which incorporates the mixed-mode stress intensity factors is applied to characterize the singularity at a moving crack tip. In order to evaluate the accuracy of the numerical procedure, the analysis of a crack emanating from a circular hole in a finite plate is performed and the results are compared with the available numerical solution. The effects of various micro-defects on the crack path and fatigue life are investigated. The results agree with the experimental observations.

  1. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction), are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

  2. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  3. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  4. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to solve the corner segment. As validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.
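
    A compound (composite) Simpson rule of the kind applied to the corner coil segment can be sketched as follows; the integrand here is only a placeholder, not the motor's actual Lorentz force density:

        import numpy as np

        # Hedged sketch of a compound (composite) Simpson rule; the integrand is a
        # placeholder standing in for the force density along the corner coil segment.
        def compound_simpson(f, a, b, n):
            if n % 2:                     # Simpson needs an even number of sub-intervals
                n += 1
            x = np.linspace(a, b, n + 1)
            w = np.ones(n + 1)
            w[1:-1:2] = 4.0               # odd interior nodes
            w[2:-1:2] = 2.0               # even interior nodes
            return (b - a) / (3.0 * n) * np.dot(w, f(x))

        # usage: quarter-circle arc parameterised by angle, with a toy integrand
        f = lambda theta: np.cos(theta) * np.exp(-theta)
        print(compound_simpson(f, 0.0, np.pi / 2, 64))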

  5. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
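
    The codes compared in this study are not reproduced here, but the character of the comparison can be sketched with a modern library: a stiff BDF method (the family behind LSODE's stiff option) versus an explicit Runge-Kutta pair on Robertson's kinetics problem, a common stand-in for combustion-type stiffness. The tolerances and final time are illustrative choices.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def robertson(t, y):
        """Robertson's stiff chemical kinetics test problem."""
        y1, y2, y3 = y
        return [-0.04 * y1 + 1.0e4 * y2 * y3,
                 0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                 3.0e7 * y2 ** 2]

    y0 = [1.0, 0.0, 0.0]
    for method in ("BDF", "RK45"):
        sol = solve_ivp(robertson, (0.0, 40.0), y0, method=method, rtol=1e-6, atol=1e-10)
        print(method, "steps taken:", sol.t.size, "final state:", sol.y[:, -1])
    ```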

  6. Numerical Uncertainties in the Simulation of Reversible Isentropic Processes and Entropy Conservation.

    NASA Astrophysics Data System (ADS)

    Johnson, Donald R.; Lenzen, Allen J.; Zapotocny, Tom H.; Schaack, Todd K.

    2000-11-01

    A challenge common to weather, climate, and seasonal numerical prediction is the need to simulate accurately reversible isentropic processes in combination with appropriate determination of sources/sinks of energy and entropy. Ultimately, this task includes the distribution and transport of internal, gravitational, and kinetic energies, the energies of water substances in all forms, and the related thermodynamic processes of phase changes involved with clouds, including condensation, evaporation, and precipitation processes. All of the processes noted above involve the entropies of matter, radiation, and chemical substances, conservation during transport, and/or changes in entropies by physical processes internal to the atmosphere. With respect to the entropy of matter, a means to study a model's accuracy in simulating internal hydrologic processes is to determine its capability to simulate the appropriate conservation of potential and equivalent potential temperature as surrogates of dry and moist entropy under reversible adiabatic processes in which clouds form, evaporate, and precipitate. In this study, a statistical strategy utilizing the concept of `pure error' is set forth to assess the numerical accuracies of models to simulate reversible processes during 10-day integrations of the global circulation corresponding to the global residence time of water vapor. During the integrations, the sums of squared differences between the equivalent potential temperature θe numerically simulated by the governing equations of mass, energy, water vapor, and cloud water and a proxy equivalent potential temperature θte numerically simulated as a conservative property are monitored. Inspection of the differences of θe and θte in time and space and the relative frequency distribution of the differences details bias and random errors that develop from nonlinear numerical inaccuracies in the advection and transport of potential temperature and water substances within the global atmosphere. A series of nine global simulations employing various versions of the Community Climate Models CCM2 and CCM3 (all Eulerian spectral numerics, all semi-Lagrangian numerics, and mixed Eulerian spectral and semi-Lagrangian numerics) and the University of Wisconsin-Madison (UW) isentropic-sigma gridpoint model provides an interesting comparison of numerical accuracies in the simulation of reversibility. By day 10, large bias and random differences were identified in the simulation of reversible processes in all of the models except for the UW isentropic-sigma model. The CCM2 and CCM3 simulations yielded systematic differences that varied zonally, vertically, and temporally. Within the comparison, the UW isentropic-sigma model was superior in transporting water vapor and cloud water/ice and in simulating reversibility involving the conservation of dry and moist entropy. The only relative frequency distribution of differences that appeared optimal, in that the distribution remained unbiased and equilibrated with minimal variance as it remained statistically stationary, was the distribution from the UW isentropic-sigma model. All other distributions revealed nonstationary characteristics with spreading and/or shifting of the maxima as the biases and variances of the numerical differences of θe and θte amplified.

  7. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.
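
    The arbitrary-order construction of the paper is not reproduced here; the sketch below is only the familiar second-order staggered (leapfrog) baseline for the 1-D linear acoustic system, with illustrative grid, pulse and material parameters.

    ```python
    import numpy as np

    # Second-order staggered leapfrog for dp/dt = -rho*c^2 dv/dx, dv/dt = -(1/rho) dp/dx
    nx, c, rho = 400, 1.0, 1.0
    dx = 1.0 / nx
    dt = 0.5 * dx / c                                        # CFL-limited step
    p = np.exp(-((np.arange(nx) * dx - 0.5) / 0.05) ** 2)    # Gaussian pressure pulse
    v = np.zeros(nx + 1)                                     # velocity on the staggered grid

    for _ in range(400):
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])        # update interior velocities
        p -= dt * rho * c ** 2 / dx * (v[1:] - v[:-1])       # update pressures half a step later
    print(p.min(), p.max())
    ```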

  8. Application of a symmetric total variation diminishing scheme to aerodynamics of rotors

    NASA Astrophysics Data System (ADS)

    Usta, Ebru

    2002-09-01

    The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integrations and turbulence effects on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in an integral form using a third order upwind scheme. It is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth- or eighth-order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code and experiments are given. Results are also presented on the effects of metrics calculations, load integration algorithms, and turbulence models on the solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions is presented. It should be noted that use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that the solution accuracy increases when the 3rd order upwind scheme in the baseline algorithm is replaced with 4th and 6th order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves the tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial, but small, effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with a one-equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless, the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.

  9. An integral formulation for wave propagation on weakly non-uniform potential flows

    NASA Astrophysics Data System (ADS)

    Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel

    2016-12-01

    An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.

  10. Oblique scattering from radially inhomogeneous dielectric cylinders: An exact Volterra integral equation formulation

    NASA Astrophysics Data System (ADS)

    Tsalamengas, John L.

    2018-07-01

    We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.

  11. Assessment of numerical techniques for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1989-01-01

    The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagations, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is relatively more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best performing one is the Runge-Kutta method for time integration combined with sixth-order central differences for spatial discretization.

  12. Numerical considerations in the development and implementation of constitutive models

    NASA Technical Reports Server (NTRS)

    Haisler, W. E.; Imbrie, P. K.

    1985-01-01

    Several unified constitutive models were tested in uniaxial form by specifying input strain histories and comparing output stress histories. The purpose of the tests was to evaluate several time integration methods with regard to accuracy, stability, and computational economy. The sensitivity of the models to slight changes in input constants was also investigated. Results are presented for IN100 at 1350 °F and Hastelloy-X at 1800 °F.

  13. Statistical Analysis of Atmospheric Forecast Model Accuracy - A Focus on Multiple Atmospheric Variables and Location-Based Analysis

    DTIC Science & Technology

    2014-04-01

    The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed for operational forecasting and atmospheric research. This report examined WRF model forecast accuracy for multiple atmospheric variables using a location-based analysis. Keywords: WRF, weather research and forecasting, atmospheric effects.

  14. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.

  15. Fast Numerical Solution of the Plasma Response Matrix for Real-time Ideal MHD Control

    DOE PAGES

    Glasser, Alexander; Kolemen, Egemen; Glasser, Alan H.

    2018-03-26

    To help effectuate near real-time feedback control of ideal MHD instabilities in tokamak geometries, a parallelized version of A.H. Glasser’s DCON (Direct Criterion of Newcomb) code is developed. To motivate the numerical implementation, we first solve DCON’s δW formulation with a Hamilton-Jacobi theory, elucidating analytical and numerical features of the ideal MHD stability problem. The plasma response matrix is demonstrated to be the solution of an ideal MHD Riccati equation. We then describe our adaptation of DCON with numerical methods natural to solutions of the Riccati equation, parallelizing it to enable its operation in near real-time. We replace DCON’s serial integration of perturbed modes, which satisfy a singular Euler-Lagrange equation, with a domain-decomposed integration of state transition matrices. Output is shown to match results from DCON with high accuracy, and with computation time < 1 s. Such computational speed may enable active feedback ideal MHD stability control, especially in plasmas whose ideal MHD equilibria evolve with inductive timescale τ ≳ 1 s, as in ITER. Further potential applications of this theory are discussed.

  16. Dynamic Beam Solutions for Real-Time Simulation and Control Development of Flexible Rockets

    NASA Technical Reports Server (NTRS)

    Su, Weihua; King, Cecilia K.; Clark, Scott R.; Griffin, Edwin D.; Suhey, Jeffrey D.; Wolf, Michael G.

    2016-01-01

    In this study, flexible rockets are structurally represented by linear beams. Both direct and indirect solutions of beam dynamic equations are sought to facilitate real-time simulation and control development for flexible rockets. The direct solution is obtained by numerically integrating the beam structural dynamic equation using an explicit Newmark-based scheme, which allows for stable and fast transient solutions to the dynamics of flexible rockets. Furthermore, in the real-time operation, the bending strain of the beam is measured by fiber optical sensors (FOS) at intermittent locations along the span, while both angular velocity and translational acceleration are measured at a single point by the inertial measurement unit (IMU). Another study in this paper finds analytical and numerical solutions of the beam dynamics based on the limited measurement data to facilitate the real-time control development. Numerical studies demonstrate the accuracy of these real-time solutions to the beam dynamics. Such analytical and numerical solutions, when integrated with data processing and control algorithms and mechanisms, have the potential to increase launch availability by processing flight data into the flexible launch vehicle's control system.
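
    For orientation, a minimal explicit Newmark-type (central-difference) update for a lumped-mass linear system M a + K u = f is sketched below; the two-degree-of-freedom matrices are illustrative placeholders, not the rocket beam model used in the paper.

    ```python
    import numpy as np

    def explicit_newmark_step(m_lumped, K, u, v, a, f, dt):
        """One explicit Newmark step (beta = 0, gamma = 1/2) for M a + K u = f."""
        u_new = u + dt * v + 0.5 * dt ** 2 * a
        a_new = (f - K @ u_new) / m_lumped      # diagonal (lumped) mass -> trivial solve
        v_new = v + 0.5 * dt * (a + a_new)
        return u_new, v_new, a_new

    m = np.array([1.0, 1.0])                    # lumped masses (illustrative)
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])                # toy stiffness coupling
    f = np.array([0.0, 1.0])                    # constant tip load
    u = np.zeros(2); v = np.zeros(2)
    a = (f - K @ u) / m                         # consistent initial acceleration
    for _ in range(100):
        u, v, a = explicit_newmark_step(m, K, u, v, a, f, dt=0.01)
    print(u)
    ```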

  17. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    NASA Astrophysics Data System (ADS)

    Gao, Zhiwen; Zhou, Youhe

    2015-04-01

    A real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, where the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by a Gauss-Lobatto-Chebyshev (GLC) collocation method. To guarantee a satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results of fracture parameters are obtained and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIF) are discussed.

  18. A numerical method for determination of source time functions for general three-dimensional rupture propagation

    NASA Technical Reports Server (NTRS)

    Das, S.

    1979-01-01

    A method to determine the displacement and the stress on the crack plane for a three-dimensional shear crack of arbitrary shape propagating in an infinite, homogeneous medium which is linearly elastic everywhere off the crack plane is presented. The main idea of the method is to use a representation theorem in which the displacement at any given point on the crack plane is written as an integral of the traction over the whole crack plane. As a test of the accuracy of the numerical technique, the results are compared with known solutions for two simple cases.

  19. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  20. Static bending deflection and free vibration analysis of moderate thick symmetric laminated plates using multidimensional wave digital filters

    NASA Astrophysics Data System (ADS)

    Tseng, Chien-Hsun

    2018-06-01

    This paper aims to develop a multidimensional wave digital filtering network for predicting the static and dynamic behaviors of composite laminates based on the FSDT. The resultant network is, thus, an integrated platform that can perform not only the free vibration but also the bending deflection analysis of moderately thick symmetric laminated plates with low plate side-to-thickness ratios (≤ 20). Safeguarded by the Courant-Friedrichs-Lewy stability condition with the least restriction in terms of optimization technique, the present method offers numerically high accuracy, stability and efficiency over a wide range of modulus ratios for FSDT laminated plates. Instead of using a constant shear correction factor (SCF) with limited numerical accuracy for the bending deflection, an optimum SCF is sought by looking for a minimum ratio of change in the transverse shear energy. In this way, the method can predict bending deflections with comparably good accuracy in certain cases. Extensive simulation results carried out for the prediction of maximum bending deflection demonstrate that the present method outperforms those based on the higher-order shear deformation and layerwise plate theories. To the best of our knowledge, this is the first work to show that an optimal selection of the SCF can significantly increase the accuracy of FSDT-based laminates, especially compared to the higher-order theory, which requires no shear correction. The overall solution accuracy is assessed against the 3D elasticity equilibrium solution.

  1. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  2. Non-numeric computation for high eccentricity orbits. [Earth satellite orbit perturbation

    NASA Technical Reports Server (NTRS)

    Sridharan, R.; Renard, M. L.

    1975-01-01

    Geocentric orbits of large eccentricity (e = 0.9 to 0.95) are significantly perturbed in cislunar space by the sun and moon. The time-history of the height of perigee, subsequent to launch, is particularly critical. The determination of 'launch windows' is mostly concerned with preventing the height of perigee from falling below its low initial value before the mission lifetime has elapsed. Between the extremes of high-accuracy digital integration of the equations of motion and of using an approximate, but very fast, stability criterion method, this paper is concerned with the development of a method of intermediate complexity using non-numeric computation. The computer is used as the theory generator to generalize Lidov's theory using six osculating elements. Symbolic integration is completely automated and the output is a set of condensed formulae well suited for repeated applications in launch window analysis. Examples of applications are given.

  3. (3749) BALAM: A VERY YOUNG MULTIPLE ASTEROID SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vokrouhlicky, David, E-mail: vokrouhl@cesnet.c

    2009-11-20

    Binaries and multiple systems among small bodies in the solar system have received wide attention over the past decade. This is because their observations provide a wealth of data otherwise inaccessible for single objects. We use numerical integration to prove that the multiple asteroid system (3749) Balam is very young, in contrast to its previously assumed age of 0.5-1 Gyr related to the formation of the Flora family. This work is enabled by a fortuitous discovery of a paired component to (3749) Balam. We first show that the proximity of the (3749) Balam and 2009 BR60 orbits is not a statistical fluke of an otherwise quasi-uniform distribution. Numerical integrations then strengthen the case and allow us to prove that 2009 BR60 separated from the Balam system less than a million years ago. This is the first time the age of a binary asteroid can be estimated with such accuracy.

  4. Contour integral method for obtaining the self-energy matrices of electrodes in electron transport calculations

    NASA Astrophysics Data System (ADS)

    Iwase, Shigeru; Futamura, Yasunori; Imakura, Akira; Sakurai, Tetsuya; Tsukamoto, Shigeru; Ono, Tomoya

    2018-05-01

    We propose an efficient computational method for evaluating the self-energy matrices of electrodes to study ballistic electron transport properties in nanoscale systems. To reduce the high computational cost incurred in large systems, a contour integral eigensolver based on the Sakurai-Sugiura method combined with the shifted biconjugate gradient method is developed to solve an exponential-type eigenvalue problem for complex wave vectors. A remarkable feature of the proposed algorithm is that the numerical procedure is very similar to that of conventional band structure calculations. We implement the developed method in the framework of the real-space higher-order finite-difference scheme with nonlocal pseudopotentials. Numerical tests for a wide variety of materials validate the robustness, accuracy, and efficiency of the proposed method. As an illustration of the method, we present the electron transport property of the freestanding silicene with the line defect originating from the reversed buckled phases.

  5. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions.

    PubMed

    Baczewski, Andrew D; Miller, Nicholas C; Shanker, Balasubramaniam

    2012-04-01

    The analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N²) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

  6. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
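
    Numerical inversion of a Laplace-domain solution, as used above, can be illustrated with the Gaver-Stehfest algorithm; whether WTAQ1 uses exactly this scheme is not stated in the record, so the snippet below is only a generic example, checked against a transform pair with a known inverse.

    ```python
    import math

    def stehfest_invert(F, t, N=12):
        """Gaver-Stehfest inversion of a Laplace-domain function F(s) at time t (N even)."""
        ln2_t = math.log(2.0) / t
        total = 0.0
        for k in range(1, N + 1):
            V = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                V += (j ** (N // 2) * math.factorial(2 * j)
                      / (math.factorial(N // 2 - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
            total += (-1) ** (k + N // 2) * V * F(k * ln2_t)
        return ln2_t * total

    # Known pair: L{exp(-t)} = 1/(s + 1); the inverse at t = 1 should be ~0.3679
    print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))
    ```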

  7. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed Derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme should satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging the turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.
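
    The optimized scheme itself is not reproduced here; the sketch below only illustrates the implicit-explicit splitting idea with a first-order IMEX Euler step for du/dt = L u + N(u), treating a stiff linear operator implicitly and a mild nonlinearity explicitly. The operator and nonlinearity are illustrative placeholders.

    ```python
    import numpy as np

    def imex_euler(u, L, N, dt, steps):
        """First-order IMEX Euler: (I - dt*L) u_{n+1} = u_n + dt*N(u_n)."""
        A = np.eye(u.size) - dt * L              # implicit treatment of the stiff linear part
        for _ in range(steps):
            u = np.linalg.solve(A, u + dt * N(u))
        return u

    L = np.array([[-50.0,  0.0],
                  [  0.0, -0.1]])                # stiff, diffusion-like linear operator
    N = lambda u: np.array([0.0, u[0] * u[1]])   # mild nonlinearity, handled explicitly
    print(imex_euler(np.array([1.0, 1.0]), L, N, dt=0.1, steps=50))
    ```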

  8. Variational and symplectic integrators for satellite relative orbit propagation including drag

    NASA Astrophysics Data System (ADS)

    Palacios, Leonel; Gurfil, Pini

    2018-04-01

    Orbit propagation algorithms for satellite relative motion relying on Runge-Kutta integrators are non-symplectic—a situation that leads to incorrect global behavior and degraded accuracy. Thus, attempts have been made to apply symplectic methods to integrate satellite relative motion. However, so far all these symplectic propagation schemes have not taken into account the effect of atmospheric drag. In this paper, drag-generalized symplectic and variational algorithms for satellite relative orbit propagation are developed in different reference frames, and numerical simulations with and without the effect of atmospheric drag are presented. It is also shown that high-order versions of the newly-developed variational and symplectic propagators are more accurate and are significantly faster than Runge-Kutta-based integrators, even in the presence of atmospheric drag.
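
    The drag-generalized variational/symplectic construction of the paper is not reproduced here; the sketch below is only the plain drag-free kick-drift-kick (leapfrog) propagator for Keplerian motion, which shows the bounded energy error typical of symplectic schemes. The units and initial circular orbit are illustrative.

    ```python
    import numpy as np

    def leapfrog_orbit(r, v, dt, steps, mu=1.0):
        """Symplectic kick-drift-kick propagation of the two-body problem (no drag)."""
        accel = lambda r: -mu * r / np.linalg.norm(r) ** 3
        for _ in range(steps):
            v = v + 0.5 * dt * accel(r)   # half kick
            r = r + dt * v                # drift
            v = v + 0.5 * dt * accel(r)   # half kick
        return r, v

    r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])     # circular orbit of radius 1
    r, v = leapfrog_orbit(r, v, dt=0.01, steps=20000)
    print(0.5 * v @ v - 1.0 / np.linalg.norm(r))          # stays near the initial energy, -0.5
    ```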

  9. JANUS: a bit-wise reversible integrator for N-body dynamics

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2018-01-01

    Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
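
    JANUS itself is not reproduced here; the sketch below only illustrates the underlying idea of exact reversibility from integer state plus rounded increments: stepping forward and then backward with the same palindromic kick-drift-kick map recovers the initial integers exactly. The fixed-point scale and the oscillator force are arbitrary illustrative choices.

    ```python
    SCALE = 2 ** 32                      # fixed-point scale factor (arbitrary choice)

    def kdk_int(x_int, v_int, dt, accel):
        """Kick-drift-kick with integer state and rounded increments; calling it again
        with -dt undoes the step exactly because every increment flips sign exactly."""
        v_int += round(accel(x_int / SCALE) * 0.5 * dt * SCALE)
        x_int += round(v_int / SCALE * dt * SCALE)
        v_int += round(accel(x_int / SCALE) * 0.5 * dt * SCALE)
        return x_int, v_int

    accel = lambda x: -x                 # harmonic oscillator
    x, v = SCALE, 0                      # position 1.0 in physical units, zero velocity
    for _ in range(1000):
        x, v = kdk_int(x, v, 0.01, accel)
    for _ in range(1000):
        x, v = kdk_int(x, v, -0.01, accel)
    print(x == SCALE and v == 0)         # True: the initial state is recovered exactly
    ```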

  10. An arbitrary high-order Discontinuous Galerkin method for elastic waves on unstructured meshes - III. Viscoelastic attenuation

    NASA Astrophysics Data System (ADS)

    Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner

    2007-01-01

    We present a new numerical method to solve the heterogeneous anelastic, seismic wave equations with arbitrary high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices. The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes while computational cost and storage space for a desired accuracy can be reduced when applying higher degree approximation polynomials. In addition, we investigate the increase in computing time, when the number of relaxation mechanisms due to the generalized Maxwell body are increased. An application to a well-acknowledged test case and comparisons with analytic and reference solutions, obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.

  11. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for the numerical modeling of light fields in a turbid medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on the elimination of the anisotropic part of the solution and the replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution ensures high convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. A significant increase in the solution accuracy with the use of synthetic iterations allows applying the two-stream approximation for determining the regular part. This approach permits generalizing the proposed method to the case of an arbitrary 3D geometry of the medium.

  12. Membrane triangles with corner drilling freedoms. III - Implementation and performance evaluation

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Alexander, Scott

    1992-01-01

    This paper completes a three-part series on the formulation of 3-node, 9-dof membrane triangles with corner drilling freedoms based on parametrized variational principles. The first four sections cover element implementation details including determination of optimal parameters and treatment of distributed loads. Then three elements of this type, labeled ALL, FF and EFF-ANDES, are tested on standard plane stress problems. ALL represents numerically integrated versions of Allman's 1988 triangle; FF is based on the free formulation triangle presented by Bergan and Felippa in 1985; and EFF-ANDES represent two different formulations of the optimal triangle derived in Parts I and II. The numerical studies indicate that the ALL, FF and EFF-ANDES elements are comparable in accuracy for elements of unitary aspect ratios. The ALL elements are found to stiffen rapidly in inplane bending for high aspect ratios, whereas the FF and EFF elements maintain accuracy. The EFF and ANDES implementations have a moderate edge in formation speed over the FF.

  13. Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning

    NASA Astrophysics Data System (ADS)

    Bradley, Ben K.

    Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes. This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.

  14. INTEGRATION OF PARTICLE-GAS SYSTEMS WITH STIFF MUTUAL DRAG INTERACTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao-Chin; Johansen, Anders, E-mail: ccyang@astro.lu.se, E-mail: anders@astro.lu.se

    2016-06-01

    Numerical simulation of numerous mm/cm-sized particles embedded in a gaseous disk has become an important tool in the study of planet formation and in understanding the dust distribution in observed protoplanetary disks. However, the mutual drag force between the gas and the particles can become so stiff, particularly because of small particles and/or strong local solid concentration, that an explicit integration of this system is computationally formidable. In this work, we consider the integration of the mutual drag force in a system of Eulerian gas and Lagrangian solid particles. Despite the entanglement between the gas and the particles under the particle-mesh construct, we are able to devise a numerical algorithm that effectively decomposes the globally coupled system of equations for the mutual drag force, and makes it possible to integrate this system on a cell-by-cell basis, which considerably reduces the computational task required. We use an analytical solution for the temporal evolution of each cell to relieve the time-step constraint posed by the mutual drag force, as well as to achieve the highest degree of accuracy. To validate our algorithm, we use an extensive suite of benchmarks with known solutions in one, two, and three dimensions, including the linear growth and the nonlinear saturation of the streaming instability. We demonstrate numerical convergence and satisfactory consistency in all cases. Our algorithm can, for example, be applied to model the evolution of the streaming instability with mm/cm-sized pebbles at high mass loading, which has important consequences for the formation scenarios of planetesimals.
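
    The cell-by-cell analytic update described above can be illustrated in the simplest setting of one gas velocity and one particle velocity coupled by momentum-conserving linear drag: the relative velocity decays exponentially while the centre-of-momentum velocity is unchanged, so the update stays well behaved no matter how stiff the drag is. The variable names (eps for the local mass loading, t_stop for the stopping time) are illustrative, and the full particle-mesh algorithm of the paper is not reproduced.

    ```python
    import numpy as np

    def analytic_drag_update(v_gas, v_par, eps, t_stop, dt):
        """Exact one-step update of mutual linear drag between a gas cell and its particles.

        dv_par/dt = -(v_par - v_gas)/t_stop and dv_gas/dt = eps*(v_par - v_gas)/t_stop,
        so the relative velocity decays as exp(-(1 + eps)*dt/t_stop) while the
        centre-of-momentum velocity is conserved.
        """
        v_com = (v_gas + eps * v_par) / (1.0 + eps)
        dv = (v_par - v_gas) * np.exp(-(1.0 + eps) * dt / t_stop)
        return v_com - eps * dv / (1.0 + eps), v_com + dv / (1.0 + eps)

    # Very stiff case: stopping time far below the step size, yet no instability
    print(analytic_drag_update(v_gas=0.0, v_par=1.0, eps=10.0, t_stop=1e-6, dt=1e-2))
    ```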

  15. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector determined by measured and computed Doppler velocities at feedback points are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain which covered the aneurysm. Local maximum wall shear stress was estimated, showing both the proper position and the value with 1% deviation. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.

  16. New trends in Taylor series based applications

    NASA Astrophysics Data System (ADS)

    Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří

    2016-06-01

    The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows using a higher order during the computation, which means a larger integration step size while keeping the desired accuracy. As an example of a complex system we can take the Telegraph Equation Model. Symbolic and numeric solutions are compared when a harmonic input signal is used.
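
    The MTSM implementation itself is not shown in the record; for a linear system y' = A y the Taylor terms can be generated recursively, and adding terms until they drop below a tolerance is what permits the large step sizes mentioned above. The harmonic-oscillator matrix, step size and tolerance below are illustrative.

    ```python
    import numpy as np

    def taylor_step(A, y, h, max_order=30, tol=1e-15):
        """One Taylor-series step for y' = A y: accumulate (hA)^k/k! * y until terms are tiny."""
        term = y.copy()
        y_new = y.copy()
        for k in range(1, max_order + 1):
            term = (h / k) * (A @ term)
            y_new = y_new + term
            if np.max(np.abs(term)) < tol:
                break
        return y_new

    A = np.array([[ 0.0, 1.0],
                  [-1.0, 0.0]])            # harmonic oscillator as a first-order system
    y = np.array([1.0, 0.0])
    for _ in range(10):
        y = taylor_step(A, y, h=0.5)       # a comparatively large step for this accuracy
    print(y[0], np.cos(5.0))               # first component tracks cos(t) at t = 5
    ```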

  17. New precession expressions, valid for long time intervals

    NASA Astrophysics Data System (ADS)

    Vondrák, J.; Capitaine, N.; Wallace, P.

    2011-10-01

    Context. The present IAU model of precession, like its predecessors, is given as a set of polynomial approximations of various precession parameters intended for high-accuracy applications over a limited time span. Earlier comparisons with numerical integrations have shown that this model is valid only for a few centuries around the basic epoch, J2000.0, while for more distant epochs it rapidly diverges from the numerical solution. In our preceding studies we also obtained preliminary developments for the precessional contribution to the motion of the equator: coordinates X,Y of the precessing pole and precession parameters ψA,ωA, suitable for use over long time intervals. Aims: The goal of the present paper is to obtain upgraded developments for various sets of precession angles that would fit modern observations near J2000.0 and at the same time fit numerical integration of the motions of solar system bodies on scales of several thousand centuries. Methods: We used the IAU 2006 solutions to represent the precession of the ecliptic and of the equator close to J2000.0 and, for more distant epochs, a numerical integration using the Mercury 6 package and solutions by Laskar et al. (1993, A&A, 270, 522) with upgraded initial conditions and constants to represent the ecliptic, and general precession and obliquity, respectively. From them, different precession parameters were calculated in the interval ± 200 millennia from J2000.0, and analytical expressions are found that provide a good fit for the whole interval. Results: Series for the various precessional parameters, comprising a cubic polynomial plus from 8 to 14 periodic terms, are derived that allow precession to be computed with an accuracy comparable to IAU 2006 around the central epoch J2000.0, a few arcseconds throughout the historical period, and a few tenths of a degree at the ends of the ± 200 millennia time span. Computer algorithms are provided that compute the ecliptic and mean equator poles and the precession matrix. The Appendix containing the computer code is available in electronic form at http://www.aanda.org

  18. Solving the hypersingular boundary integral equation for the Burton and Miller formulation.

    PubMed

    Langrenne, Christophe; Garcia, Alexandre; Bonnet, Marc

    2015-11-01

    This paper presents an easy numerical implementation of the Burton and Miller (BM) formulation, where the hypersingular Helmholtz integral is regularized by identities from the associated Laplace equation, thus needing only the evaluation of weakly singular integrals. The Helmholtz equation and its normal derivative are combined directly, with combinations at edge or corner collocation nodes not used when the surface is not smooth. The hypersingular operators arising in this process are regularized and then evaluated by an indirect procedure based on discretized versions of the Calderón identities linking the integral operators for associated Laplace problems. The method is valid for acoustic radiation and scattering problems involving arbitrarily shaped three-dimensional bodies. Unlike other approaches using direct evaluation of hypersingular integrals, collocation points still coincide with mesh nodes, as is usual when using conforming elements. Using higher-order shape functions (with the boundary element method model size kept fixed) reduces the overall numerical integration effort while increasing the solution accuracy. To reduce the condition number of the resulting BM formulation at low frequencies, a regularized version α = ik/(k² + λ) of the classical BM coupling factor α = i/k is proposed. Comparisons with the combined Helmholtz integral equation formulation method of Schenck are made for four example configurations, two of them featuring non-smooth surfaces.

  19. Numerical Convergence in the Dark Matter Halos Properties Using Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Mosquera-Escobar, X. E.; Muñoz-Cuartas, J. C.

    2017-07-01

    Nowadays, the accepted cosmological model is the so-called Λ-Cold Dark Matter (ΛCDM) model. In this model, the universe is considered to be homogeneous and isotropic, composed of diverse components such as dark matter and dark energy, the latter being the most abundant one. Dark matter plays an important role because it is responsible for the generation of gravitational potential wells, commonly called dark matter halos. Ultimately, dark matter halos are characterized by a set of parameters (mass, radius, concentration, spin parameter); these parameters provide valuable information for different studies, such as galaxy formation, gravitational lensing, etc. In this work we use the publicly available code Gadget2 to perform cosmological simulations to find to what extent the numerical parameters of the simulations, such as the gravitational softening, integration time step and force calculation accuracy, affect the physical properties of the dark matter halos. We ran a suite of simulations where these parameters were varied in a systematic way in order to explore accurately their impact on the structural parameters of dark matter halos. We show that variations in the numerical parameters affect the structural parameters of dark matter halos, such as the concentration and virial radius. We show that these modifications emerged when structures become nonlinear (at redshift 2) for the scale of our simulations, such that these variations affected the formation and evolution of halo structure mainly at later cosmic times. As a quantitative result, we propose the most appropriate values for the numerical parameters of the simulations, such that they do not affect the properties of the halos that form. For the force calculation accuracy we suggest values smaller than or equal to 0.0001, an integration time step smaller than or equal to 0.005, and a gravitational softening equal to 1/60th of the mean interparticle distance; these values correspond to the smallest values in the numerical parameter variations. This is an important numerical exercise since, for instance, it is believed that galaxy structural parameters are strongly dependent on dark matter halo structural parameters.

  20. Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems.

    PubMed

    Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

    2016-07-26

    This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as the GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF has been employed to avoid numerical instability in the system model. In processing navigation integration, the performance of nonlinear filter based estimation of the position and velocity states may severely degrade because of modeling errors due to dynamics uncertainties of the vehicle. In order to resolve the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented by introducing the FLAS to adjust the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely implementing the tuning of the process noise covariance matrix based on the information of the degree of divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement as compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches.
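
    The third-degree spherical-radial cubature rule mentioned above generates 2n equally weighted points from the state mean and a square root of the covariance; the sketch below shows only this point-generation step with an illustrative two-state prior, not the full FACKF filter.

    ```python
    import numpy as np

    def cubature_points(mean, cov):
        """Third-degree spherical-radial cubature points: mean +/- sqrt(n)*S columns, weights 1/(2n)."""
        n = mean.size
        S = np.linalg.cholesky(cov)                    # one valid square root of the covariance
        offsets = np.sqrt(n) * np.hstack([S, -S])
        return mean[:, None] + offsets, np.full(2 * n, 1.0 / (2 * n))

    pts, w = cubature_points(np.array([0.0, 1.0]), np.diag([0.5, 0.1]))
    print(pts)        # shape (2, 4): the four cubature points
    print(pts @ w)    # the weighted mean reproduces the prior mean
    ```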

  1. Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems

    PubMed Central

    Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing

    2016-01-01

    This paper presents a sensor fusion method based on the combination of the cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as the GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF has been employed to avoid numerical instability in the system model. In navigation integration processing, the performance of nonlinear-filter-based estimation of the position and velocity states may degrade severely because of modeling errors caused by dynamics uncertainties of the vehicle. In order to resolve the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented, in which the FLAS adjusts the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on the degree of divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches. PMID:27472336

  2. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse and specific power. These charts are intended to be a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well known trajectory optimization code that involves numerical integration based on calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated. The accuracy of this method is discussed in the paper.

  3. A new uniformly valid asymptotic integration algorithm for elasto-plastic creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Walker, Kevin P.

    1991-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code, through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small- and large-scale analyses. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  4. Operational Solution to the Nonlinear Klein-Gordon Equation

    NASA Astrophysics Data System (ADS)

    Bengochea, G.; Verde-Star, L.; Ortigueira, M.

    2018-05-01

    We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method does not use any integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. Supported by Grant SEP-CONACYT 220603; the first author was supported by SEP-PRODEP through the project UAM-PTC-630, and the third author was supported by Portuguese National Funds through the FCT Foundation for Science and Technology under the project PEst-UID/EEA/00066/2013.
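
    The Adomian polynomial expansion referred to above can be generated symbolically from its standard definition, A_n = (1/n!) d^n/dλ^n f(Σ_k u_k λ^k) evaluated at λ = 0. The sketch below (SymPy, with a hypothetical helper name) illustrates only this textbook definition; it is not the authors' operational method.

```python
import sympy as sp

def adomian_polynomials(f, n_terms):
    """Compute the first n_terms Adomian polynomials A_n for a nonlinearity f(u)."""
    lam = sp.Symbol('lambda')
    u = sp.symbols(f'u0:{n_terms}')                    # u0, u1, ..., u_{n_terms-1}
    series = sum(u[k] * lam**k for k in range(n_terms))
    polys = []
    for n in range(n_terms):
        # A_n = (1/n!) * d^n/dlambda^n f(series) at lambda = 0
        A_n = sp.diff(f(series), lam, n).subs(lam, 0) / sp.factorial(n)
        polys.append(sp.expand(A_n))
    return polys

# Example: quadratic nonlinearity f(u) = u**2
for n, A in enumerate(adomian_polynomials(lambda x: x**2, 4)):
    print(f"A_{n} =", A)
```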

  5. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. By employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with orthogonal-grid finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  6. On the generalized VIP time integral methodology for transient thermal problems

    NASA Technical Reports Server (NTRS)

    Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computation for the efficient use of computing environments, a major motivation for the developments described in this paper is the need for explicit computational procedures with improved accuracy and stability characteristics, in contrast to past approaches for general heat transfer computations. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Illustrative numerical examples are provided to demonstrate the developments and validate the results obtained for thermal problems.

  7. A new uniformly valid asymptotic integration algorithm for elasto-plastic-creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, A.; Walker, K. P.

    1989-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code, through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small- and large-scale analyses. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  8. An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.

    1979-01-01

    The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.

  9. Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Jinxing; Pu Zuyin; Xie Lun

    Charged particle dynamics in the magnetosphere has temporal and spatial multiscale character; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time, compared with standard integrators such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for an arbitrarily long simulation time with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.

  10. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable because treating the gradients as additional unknowns greatly increases the size of the matrix equation, while approximating the gradients by numerical differentiation introduces numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  11. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes infeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
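
    The brute-force Monte Carlo reference mentioned above amounts to averaging the likelihood over samples drawn from the prior. A minimal sketch, with a made-up conjugate toy problem so the estimate can be checked against the exact evidence (not the hydrological models of the study):

```python
import numpy as np

def bme_monte_carlo(log_likelihood, prior_sampler, n_samples=100_000, rng=None):
    """Brute-force Monte Carlo estimate of log Bayesian model evidence.

    BME = integral of p(D|theta) p(theta) dtheta, approximated by averaging the
    likelihood over prior samples (log-mean-exp used for numerical stability).
    """
    rng = np.random.default_rng(rng)
    theta = prior_sampler(n_samples, rng)                 # shape (n_samples, n_params)
    log_l = np.array([log_likelihood(t) for t in theta])
    m = log_l.max()
    return m + np.log(np.mean(np.exp(log_l - m)))

# Toy conjugate example: data y ~ N(theta, 1), prior theta ~ N(0, 1) => marginal y ~ N(0, 2)
y_obs = 0.7
log_like = lambda th: -0.5 * np.log(2 * np.pi) - 0.5 * (y_obs - th[0]) ** 2
prior = lambda n, rng: rng.standard_normal((n, 1))
print("MC   log-evidence:", bme_monte_carlo(log_like, prior, rng=0))
print("true log-evidence:", -0.5 * np.log(2 * np.pi * 2) - y_obs ** 2 / 4)
```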

  12. Seismic waves in heterogeneous material: subcell resolution of the discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Castro, Cristóbal E.; Käser, Martin; Brietzke, Gilbert B.

    2010-07-01

    We present an important extension of the arbitrary high-order discontinuous Galerkin (DG) finite-element method to model 2-D elastic wave propagation in highly heterogeneous material. In this new approach we include space-variable coefficients to describe smooth or discontinuous material variations inside each element using the same numerical approximation strategy as for the velocity-stress variables in the formulation of the elastic wave equation. The combination of the DG method with a time integration scheme based on the solution of arbitrary accuracy derivatives Riemann problems still provides an explicit, one-step scheme which achieves arbitrary high-order accuracy in space and time. Compared to previous formulations the new scheme contains two additional terms in the form of volume integrals. We show that the increasing computational cost per element can be overcompensated due to the improved material representation inside each element, as coarser meshes can be used, which reduces the total number of elements and therefore the computational time needed to reach a desired error level. We confirm the accuracy of the proposed scheme by performing convergence tests and several numerical experiments considering smooth and highly heterogeneous material. As the approximation of the velocity and stress variables in the wave equation and of the material properties in the model can be chosen independently, we investigate the influence of the polynomial material representation on the accuracy of the synthetic seismograms with respect to computational cost. Moreover, we study the behaviour of the new method on strong material discontinuities, in the case where the mesh is not aligned with such a material interface. In this case a second-order linear material approximation seems to be the best choice, with higher-order intra-cell approximation leading to potentially unstable behaviour. For all test cases we validate our solution against the well-established standard fourth-order finite difference and spectral element methods.

  13. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values for varying values of the proportion and sample size at each given probability, to show the accuracy obtained for small sample sizes.
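
    For the one-sided case, the tolerance factor k can be written exactly in terms of a noncentral t quantile, k = t'_{γ}(n-1, z_p√n)/√n. The sketch below evaluates this standard relation with SciPy; it is a generic illustration, not the code described in the report.

```python
import numpy as np
from scipy import stats

def one_sided_tolerance_factor(n, coverage, confidence):
    """Exact one-sided normal tolerance factor k via the noncentral t-distribution.

    Standard relation: k = t'_{confidence}(df=n-1, nc=z_coverage*sqrt(n)) / sqrt(n)
    """
    z_p = stats.norm.ppf(coverage)
    nc = z_p * np.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=nc) / np.sqrt(n)

# Example: k for 95 % coverage with 95 % confidence and a sample of size 10
print(round(one_sided_tolerance_factor(10, 0.95, 0.95), 3))   # about 2.91
```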

  14. Mathematical and Numerical Aspects of the Adaptive Fast Multipole Poisson-Boltzmann Solver

    DOE PAGES

    Zhang, Bo; Lu, Benzhuo; Cheng, Xiaolin; ...

    2013-01-01

    This paper summarizes the mathematical and numerical theories and computational elements of the adaptive fast multipole Poisson-Boltzmann (AFMPB) solver. We introduce and discuss the following components in order: the Poisson-Boltzmann model, boundary integral equation reformulation, surface mesh generation, the node-patch discretization approach, Krylov iterative methods, the new version of fast multipole methods (FMMs), and a dynamic prioritization technique for scheduling parallel operations. For each component, we also remark on feasible approaches for further improvements in efficiency, accuracy and applicability of the AFMPB solver to large-scale long-time molecular dynamics simulations. Lastly, the potential of the solver is demonstrated with preliminary numerical results.

  15. Boundary integral equation analysis for suspension of spheres in Stokes flow

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Veerapaneni, Shravan

    2018-06-01

    We show that the standard boundary integral operators, defined on the unit sphere, for the Stokes equations diagonalize on a specific set of vector spherical harmonics and provide formulas for their spectra. We also derive analytical expressions for evaluating the operators away from the boundary. When two particles are located close to each other, we use a truncated series expansion to compute the hydrodynamic interaction. On the other hand, we use the standard spectrally accurate quadrature scheme to evaluate smooth integrals on the far field, and accelerate the resulting discrete sums using the fast multipole method (FMM). We employ this discretization scheme to analyze several boundary integral formulations of interest including those arising in porous media flow, active matter and magneto-hydrodynamics of rigid particles. We provide numerical results verifying the accuracy and scaling of their evaluation.

  16. A New Family of Compact High Order Coupled Time-Space Unconditionally Stable Vertical Advection Schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, F.; Debreu, L.

    2016-02-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with Courant numbers small compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining robust, in terms of accuracy, to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high-order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.

  17. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers

    PubMed Central

    Thompson, Clarissa A.; Opfer, John E.

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688

  18. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers.

    PubMed

    Thompson, Clarissa A; Opfer, John E

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.

  19. Approximate solutions of acoustic 3D integral equation and their application to seismic modeling and full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2017-10-01

    Over the recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, which are more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments could be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of those methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient, compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.
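
    As a point of reference for the approximations discussed above, the classic (first) Born approximation replaces the unknown field inside the scattering integral by the incident field. A one-dimensional toy sketch of that idea follows (the paper itself works with 3-D volumetric operators; the scatterer and wavenumber below are made-up values):

```python
import numpy as np

# 1-D Born approximation for the Lippmann-Schwinger equation
#   u(x) = u0(x) + k^2 * integral G(x, x') chi(x') u(x') dx',
#   with outgoing Green's function G(x, x') = i/(2k) * exp(i k |x - x'|).
# Replacing u by u0 under the integral gives the first Born approximation.
k = 2.0 * np.pi                                # background wavenumber (assumed)
x = np.linspace(-2.0, 2.0, 801)                # computational grid
dx = x[1] - x[0]
chi = np.where(np.abs(x) < 0.25, 0.1, 0.0)     # weak scatterer (contrast function, assumed)

u0 = np.exp(1j * k * x)                                         # incident plane wave
G = 1j / (2.0 * k) * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))
u_born = u0 + k**2 * (G * chi[None, :] * u0[None, :]).sum(axis=1) * dx

print("max |scattered field| (Born):", np.abs(u_born - u0).max())
```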

  20. Extraction of MOS VLSI (Very-Large-Scale-Integrated) Circuit Models Including Critical Interconnect Parasitics.

    DTIC Science & Technology

    1987-09-01

    can be reduced substantially, compared to using numerical methods to model interconnect parasitics. Although some accuracy might be lost with...conductor widths and spacings listed in Table 2 1, have been employed for simulation. In the first set of the simulations, planar dielectric inter...model, there are no restrictions on the number of dielectrics and conductors, and the shape of the conductors and the dielectric inter... In the

  1. On the Far-Zone Electromagnetic Field of a Horizontal Electric Dipole Over an Imperfectly Conducting Half-Space With Extensions to Plasmonics

    NASA Astrophysics Data System (ADS)

    Michalski, Krzysztof A.; Lin, Hung-I.

    2018-01-01

    Second-order asymptotic formulas for the electromagnetic fields of a horizontal electric dipole over an imperfectly conducting half-space are derived using the modified saddle-point method. Application examples are presented for ordinary and plasmonic media, and the accuracy of the new formulation is assessed by comparisons with two alternative state-of-the-art theories and with the rigorous results of numerical integration.

  2. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data starting from a selected temperature point down to zero kelvin deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and Gd5Si2Ge2 compound measured from near zero temperatures.

  3. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE PAGES

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.; ...

    2018-02-27

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data starting from a selected temperature point down to zero kelvin deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and Gd5Si2Ge2 compound measured from near zero temperatures.

  4. Effect of mesh distortion on the accuracy of transverse shear stresses and their sensitivity coefficients in multilayered composites

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Yong H.

    1995-01-01

    A study is made of the effect of mesh distortion on the accuracy of transverse shear stresses and their first-order and second-order sensitivity coefficients in multilayered composite panels subjected to mechanical and thermal loads. The panels are discretized by using a two-field degenerate solid element, with the fundamental unknowns consisting of both displacement and strain components, and the displacement components having a linear variation throughout the thickness of the laminate. A two-step computational procedure is used for evaluating the transverse shear stresses. In the first step, the in-plane stresses in the different layers are calculated at the numerical quadrature points for each element. In the second step, the transverse shear stresses are evaluated by using piecewise integration, in the thickness direction, of the three-dimensional equilibrium equations. The same procedure is used for evaluating the sensitivity coefficients of transverse shear stresses. Numerical results are presented showing no noticeable degradation in the accuracy of the in-plane stresses and their sensitivity coefficients with mesh distortion. However, such degradation is observed for the transverse shear stresses and their sensitivity coefficients. The standard of comparison is taken to be the exact solution of the three-dimensional thermoelasticity equations of the panel.

  5. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
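
    The difference between the Euler-Maruyama and Milstein updates is easiest to see on a scalar test SDE with a known exact solution. The sketch below uses geometric Brownian motion (a standard test problem, not the Langevin collision operator of the paper) and prints the strong errors of both schemes:

```python
import numpy as np

# Strong-convergence comparison on dX = a X dt + b X dW with exact solution
# X(T) = X0 * exp((a - b^2/2) T + b W(T)).
a, b, x0, T = 1.5, 1.0, 1.0, 1.0
rng = np.random.default_rng(1)

def strong_error(n_steps, n_paths=1000):
    dt = T / n_steps
    errs_em, errs_mil = [], []
    for _ in range(n_paths):
        dW = rng.normal(0.0, np.sqrt(dt), n_steps)
        x_em = x_mil = x0
        for dw in dW:
            x_em  += a * x_em * dt + b * x_em * dw
            x_mil += a * x_mil * dt + b * x_mil * dw \
                     + 0.5 * b**2 * x_mil * (dw**2 - dt)   # Milstein correction term
        x_exact = x0 * np.exp((a - 0.5 * b**2) * T + b * dW.sum())
        errs_em.append(abs(x_em - x_exact))
        errs_mil.append(abs(x_mil - x_exact))
    return np.mean(errs_em), np.mean(errs_mil)

for n in (50, 100, 200, 400):
    em, mil = strong_error(n)
    print(f"steps={n:4d}  Euler-Maruyama err={em:.4f}  Milstein err={mil:.5f}")
```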

  6. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

    The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
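
    The underlying Sievert integral, S(θ, μt) = ∫_0^θ exp(-μt/cos φ) dφ, is straightforward to evaluate numerically. A minimal sketch using SciPy quadrature with made-up parameter values (the paper's formalism adds a primary/scatter separation on top of this kind of integral):

```python
import numpy as np
from scipy.integrate import quad

def sievert_integral(theta, mu_t):
    """Classical Sievert integral S(theta, mu*t) = integral_0^theta exp(-mu*t / cos(phi)) dphi."""
    integrand = lambda phi: np.exp(-mu_t / np.cos(phi))
    val, _ = quad(integrand, 0.0, theta)
    return val

# Example: filtration equivalent to mu*t = 0.5 over a 60-degree aperture (assumed values)
print(sievert_integral(np.radians(60.0), 0.5))
```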

  7. How localized is ``local?'' Efficiency vs. accuracy of O(N) domain decomposition in local orbital based all-electron electronic structure theory

    NASA Astrophysics Data System (ADS)

    Havu, Vile; Blum, Volker; Scheffler, Matthias

    2007-03-01

    Numeric atom-centered local orbitals (NAO) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [Alanine-based polypeptide chains (Ala)_n], and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius r_c, and exploited by appropriately shaped integration domains (subsets of integration points). Conventional tight r_c <= 3 Å have no measurable accuracy impact in (Ala)_n, but introduce inaccuracies of 20-30 meV/atom in Cu_n. The domain shape impacts the computational effort by only 10-20% for reasonable r_c. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).

  8. Benchmark solution of the dynamic response of a spherical shell at finite strain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Brock, Jerry S.

    2016-09-28

    Our paper describes the development of high fidelity solutions for the study of homogeneous (elastic and inelastic) spherical shells subject to dynamic loading and undergoing finite deformations. The goal of the activity is to provide high accuracy results that can be used as benchmark solutions for the verification of computational physics codes. Furthermore, the equilibrium equations for the geometrically non-linear problem are solved through mode expansion of the displacement field and the boundary conditions are enforced in a strong form. Time integration is performed through high-order implicit Runge–Kutta schemes. Finally, we evaluate accuracy and convergence of the proposed method by means of numerical examples with finite deformations and material non-linearities and inelasticity.

  9. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  10. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  11. Neural system for heartbeats recognition using genetically integrated ensemble of classifiers.

    PubMed

    Osowski, Stanislaw; Siwek, Krzysztof; Siroic, Robert

    2011-03-01

    This paper presents the application of a genetic algorithm for the integration of neural classifiers combined in an ensemble for the accurate recognition of heartbeat types on the basis of ECG registration. The idea presented in this paper is that using many classifiers arranged in the form of an ensemble leads to increased recognition accuracy. In such an ensemble, the important problem is the integration of all classifiers into one effective classification system. This paper proposes the use of a genetic algorithm for this task. It is shown that application of the genetic algorithm is very efficient and makes it possible to significantly reduce the total error of heartbeat recognition. This was confirmed by numerical experiments performed on the MIT-BIH Arrhythmia Database.

  12. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation is receiving increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and the factors affecting navigation accuracy are studied in the simulation. The simulation results indicate that the proposed observation scheme achieves accurate positioning performance, and the results of EKF and UKF are similar.

  13. Integrated and differential accuracy in resummed cross sections

    DOE PAGES

    Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.

    2017-03-30

    Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show the application to the thrust distribution in e+e- collisions at NLL'+NLO and NNLL'+NNLO.

  14. Optimal scheme of star observation of missile-borne inertial navigation system/stellar refraction integrated navigation.

    PubMed

    Lu, Jiazhen; Yang, Lie

    2018-05-01

    To achieve accurate and completely autonomous navigation for spacecraft, inertial/celestial integrated navigation is receiving increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared and the factors affecting navigation accuracy are studied in the simulation. The simulation results indicate that the proposed observation scheme achieves accurate positioning performance, and the results of EKF and UKF are similar.

  15. Numerical methods for coupled fracture problems

    NASA Astrophysics Data System (ADS)

    Viesca, Robert C.; Garagash, Dmitry I.

    2018-04-01

    We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyshev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.
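
    The root-of-Chebyshev quadratures mentioned above take a particularly simple form for first-kind polynomials: nodes at x_k = cos((2k-1)π/(2n)) with equal weights π/n for the weight function 1/√(1-x²). A minimal sketch of that rule, not the authors' solver:

```python
import numpy as np

def gauss_chebyshev_first_kind(f, n):
    """Quadrature for integral_{-1}^{1} f(x) / sqrt(1 - x^2) dx at first-kind Chebyshev roots."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))   # roots of T_n
    return (np.pi / n) * np.sum(f(x))           # equal weights pi/n

# Example: integral_{-1}^{1} x^2 / sqrt(1 - x^2) dx = pi/2
print(gauss_chebyshev_first_kind(lambda x: x**2, 8), np.pi / 2)
```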

  16. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions

    DOE PAGES

    Baczewski, Andrew David; Miller, Nicholas C.; Shanker, Balasubramaniam

    2012-03-22

    Here, the analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N^2) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

  17. A fully implicit numerical integration of the relativistic particle equation of motion

    NASA Astrophysics Data System (ADS)

    Pétri, J.

    2017-04-01

    Relativistic strongly magnetized plasmas are produced in laboratories thanks to state-of-the-art laser technology but can naturally be found around compact objects such as neutron stars and black holes. Detailed studies of the behaviour of relativistic plasmas require accurate computations able to catch the full spatial and temporal dynamics of the system. Numerical simulations of ultra-relativistic plasmas face severe restrictions due to limitations in the maximum possible Lorentz factors that current algorithms can reproduce to good accuracy. In order to circumvent this flaw and push this limit to Lorentz factors of order 10^9, we design a new fully implicit scheme to solve the relativistic particle equation of motion in an external electromagnetic field using a three-dimensional Cartesian geometry. We show some examples of numerical integrations in constant electromagnetic fields to prove the efficiency of our algorithm. The code is also able to follow the electric drift motion for high Lorentz factors. In the most general case of spatially and temporally varying electromagnetic fields, the code performs extremely well, as shown by comparison with exact analytical solutions for the relativistic electrostatic Kepler problem as well as for linearly and circularly polarized plane waves.

  18. Electromagnetic plane-wave pulse transmission into a Lorentz half-space.

    PubMed

    Cartwright, Natalie A

    2011-12-01

    The propagation of an electromagnetic plane-wave signal obliquely incident upon a Lorentz half-space is studied analytically. Time-domain asymptotic expressions that increase in accuracy with propagation distance are derived by application of uniform saddle point methods on the Fourier-Laplace integral representation of the transmitted field. The results are shown to be continuous in time and comparable with numerical calculations of the field. Arrival times and angles of refraction are given for prominent transient pulse features and the steady-state signal.

  19. The National Shipbuilding Research Program. 1997 Ship Production Symposium. Implementation of Integrated CAD/CAM Systems in Small and Medium Sized Shipyards: A Case Study

    DTIC Science & Technology

    1997-04-01

    implied, with respect to the accuracy, completeness or usefulness of the information contained in this report/manual, or that the use of any information... shipyards throughout the world have introduced various aspects of CAD/CAM piecemeal as substitutes for manual processes, the greatest improvement in... possibility of multiple models of the molded geometry being developed, which would cause the loss of geometry control. Numerous AutoLisp routines were used

  20. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however quantitatively the three dimensional results are much different.

  1. Active Vibration damping of Smart composite beams based on system identification technique

    NASA Astrophysics Data System (ADS)

    Bendine, Kouider; Satla, Zouaoui; Boukhoulda, Farouk Benallel; Nouari, Mohammed

    2018-03-01

    In the present paper, the active vibration control of a composite beam using a piezoelectric actuator is investigated. The state-space equation is determined using a system identification technique based on the structure's input-output response provided by the ANSYS APDL finite element package. A Linear Quadratic Gaussian (LQG) control law is designed and integrated into ANSYS APDL to perform closed-loop simulations. Numerical examples for different types of excitation loads are presented to test the efficiency and accuracy of the proposed model.

  2. Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kube, Susanna; Lasser, Caroline; Weber, Marcus

    2009-04-01

    The article addresses the achievable accuracy of Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration, which is necessary for computing values of the Wigner function, uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.

  3. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  4. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
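
    The basic exponential time differencing idea behind such schemes treats the stiff linear part exactly and the nonlinear part explicitly. A first-order scalar sketch (ETD1) with made-up coefficients, far simpler than the ETDRK schemes and spectral spatial discretizations used in the paper:

```python
import numpy as np

# First-order exponential time differencing (ETD1) for a stiff split ODE
#   du/dt = c*u + N(u):  c (stiff linear part) treated exactly, N treated explicitly.
c = -50.0                          # stiff linear coefficient (assumed)
N = lambda u: np.sin(u)            # mild nonlinearity (example choice)
dt, n_steps, u = 0.1, 100, 1.0

phi1 = (np.exp(c * dt) - 1.0) / (c * dt)       # phi_1 function of ETD schemes
for _ in range(n_steps):
    u = np.exp(c * dt) * u + dt * phi1 * N(u)  # ETD1 update
print("u(T) ≈", u)
```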

  5. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    NASA Astrophysics Data System (ADS)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations.

    Program summary
    Program title: BOKASUN
    Catalogue identifier: AECG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9404
    No. of bytes in distributed program, including test data, etc.: 104 123
    Distribution format: tar.gz
    Programming language: FORTRAN77
    Computer: Any computer with a Fortran compiler accepting the FORTRAN77 standard. Tested on various PCs with LINUX.
    Operating system: LINUX
    RAM: 120 kbytes
    Classification: 4.4
    Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum.
    Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations.
    Running time: To obtain 4 Master Integrals on a PC with a 2 GHz processor it takes 3 μs for the series expansion with pre-calculated coefficients, 80 μs for the series expansion without pre-calculated coefficients, and from a few seconds up to a few minutes for the Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).

  6. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
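
    To make the Picard-iteration idea concrete, here is a heavily simplified sketch for a scalar test ODE: the integrand is sampled at Chebyshev nodes, fitted with a Chebyshev series, and integrated analytically within each Picard sweep. The test problem, node count, and iteration count are assumptions for illustration; the sketch omits the vector-matrix formulation, the discrete inner products, and the parallelism of the actual MCPI method, and it is not the MCPI library's interface.

    ```python
    import numpy as np
    from numpy.polynomial import Chebyshev

    # Simplified Picard iteration with Chebyshev approximation of the integrand,
    #   y_{k+1}(t) = y0 + integral from t0 to t of f(s, y_k(s)) ds,
    # for the scalar test problem y' = -y, y(0) = 1 on [0, 2].

    f = lambda t, y: -y
    t0, tf, y0, deg = 0.0, 2.0, 1.0, 30
    # Chebyshev-Lobatto sample points mapped to [t0, tf]
    nodes = 0.5 * (tf + t0) + 0.5 * (tf - t0) * np.cos(np.pi * np.arange(deg + 1) / deg)

    y_approx = Chebyshev([y0], domain=[t0, tf])      # initial guess: constant y0
    for _ in range(40):                              # Picard sweeps
        fit = Chebyshev.fit(nodes, f(nodes, y_approx(nodes)), deg, domain=[t0, tf])
        F = fit.integ()                              # antiderivative as a Chebyshev series
        y_approx = y0 + (F - F(t0))                  # enforce y(t0) = y0

    print("max error vs exp(-t):",
          np.max(np.abs(y_approx(nodes) - np.exp(-nodes))))
    ```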

  7. Efficient isoparametric integration over arbitrary space-filling Voronoi polyhedra for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, S. N.; Wilson, Brian G.

    2011-07-06

    A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is greater than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
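
    The basic ingredient named above, tensor-product Gauss-Legendre quadrature combined with an isoparametric mapping, can be sketched in a few lines for a 2-D quadrilateral. This is only the underlying quadrature idea on a single straight-sided element (the integrand and element geometry are assumptions); it is not the weighted Voronoi tessellation or the singularity treatment of the paper.

    ```python
    import numpy as np

    # Tensor-product Gauss-Legendre quadrature over the reference square [-1,1]^2,
    # mapped to a straight-sided quadrilateral via bilinear (isoparametric) shape
    # functions. Only the basic quadrature ingredient, not the weighted Voronoi
    # polyhedron scheme of the paper above.

    def quad_gauss_legendre(f, verts, n=8):
        """Integrate f(x, y) over the quadrilateral with corners `verts` (4x2, CCW)."""
        g, w = np.polynomial.legendre.leggauss(n)
        total = 0.0
        for xi, wx in zip(g, w):
            for eta, wy in zip(g, w):
                # bilinear shape functions and their derivatives on the reference square
                N = 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                                     (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])
                dN_dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
                dN_deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
                x, y = N @ verts
                J = np.array([dN_dxi @ verts, dN_deta @ verts])  # 2x2 Jacobian
                total += wx * wy * f(x, y) * abs(np.linalg.det(J))
        return total

    square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
    print(quad_gauss_legendre(lambda x, y: x * y, square))  # exact value: 1.0
    ```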

  8. Stress and Fracture Analyses Under Elastic-plastic and Creep Conditions: Some Basic Developments and Computational Approaches

    NASA Technical Reports Server (NTRS)

    Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
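
    The remark about the superior accuracy of higher-order time integration can be illustrated with a hedged toy comparison of forward Euler and classical Runge-Kutta on a scalar test equation. The test problem and step sizes are assumptions, unrelated to the hybrid-stress finite element formulation or the frame-dependence issue discussed above.

    ```python
    import numpy as np

    # Compare the global error of forward Euler (1st order) and classical RK4
    # (4th order) on the scalar test problem y' = -2*y, y(0) = 1.

    f = lambda t, y: -2.0 * y
    exact = lambda t: np.exp(-2.0 * t)

    def integrate(step, h, t_end=1.0):
        t, y = 0.0, 1.0
        for _ in range(int(round(t_end / h))):
            y = step(t, y, h)
            t += h
        return y

    euler = lambda t, y, h: y + h * f(t, y)

    def rk4(t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    for h in (0.1, 0.05, 0.025):
        print(f"h={h:.3f}  Euler err={abs(integrate(euler, h) - exact(1.0)):.2e}"
              f"  RK4 err={abs(integrate(rk4, h) - exact(1.0)):.2e}")
    ```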

  9. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against the high-energy galactic heavy ions is needed. The HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement both in physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. The numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm(exp 2) is found when a 45 point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.

  10. Effects of numerical tolerance levels on an atmospheric chemistry model for mercury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferris, D.C.; Burns, D.S.; Shuford, J.

    1996-12-31

    A Box Model was developed to investigate the atmospheric oxidation processes of mercury in the environment. Previous results indicated the most important influences on the atmospheric concentration of HgO(g) are (i) the flux of HgO(g) volatilization, which is related to the surface medium, extent of contamination, and temperature, and (ii) the presence of Cl₂ in the atmosphere. The numerical solver which has been incorporated into the ORganic CHemistry Integrated Dispersion (ORCHID) model uses the Livermore Solver for Ordinary Differential Equations (LSODE). In the solution of the ODEs, LSODE uses numerical tolerances. The tolerances affect computer run time, the relative accuracy of the ODE-calculated species concentrations, and whether or not LSODE converges to a solution for this system of equations. The effects of varying these tolerances on the solution of the box model and the ORCHID model will be discussed.
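
    The trade-off described above, tolerances versus run time and accuracy, can be illustrated with any stiff ODE integrator in the LSODE family. The hedged sketch below uses SciPy's LSODA method on the classic Robertson kinetics problem as a stand-in; the chemistry, tolerance values, and timing are illustrative assumptions and have nothing to do with the ORCHID mercury mechanism.

    ```python
    import time
    import numpy as np
    from scipy.integrate import solve_ivp

    # How relative/absolute tolerances trade accuracy against run time for a
    # stiff chemical kinetics ODE system (the Robertson problem).

    def robertson(t, y):
        y1, y2, y3 = y
        return [-0.04 * y1 + 1e4 * y2 * y3,
                 0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                 3e7 * y2 ** 2]

    y0, t_span = [1.0, 0.0, 0.0], (0.0, 1e5)
    reference = solve_ivp(robertson, t_span, y0, method="LSODA",
                          rtol=1e-10, atol=1e-12).y[:, -1]

    for rtol in (1e-3, 1e-6, 1e-9):
        start = time.perf_counter()
        sol = solve_ivp(robertson, t_span, y0, method="LSODA",
                        rtol=rtol, atol=rtol * 1e-3)
        elapsed = time.perf_counter() - start
        err = np.max(np.abs(sol.y[:, -1] - reference))
        print(f"rtol={rtol:.0e}  stored points={sol.t.size:5d}  "
              f"time={elapsed * 1e3:6.1f} ms  max final-state error={err:.2e}")
    ```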

  11. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out efficiently. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speed up of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  12. The discrete adjoint method for parameter identification in multibody system dynamics.

    PubMed

    Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin

    2018-01-01

    The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
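
    A minimal scalar example of the discrete adjoint idea is sketched below: the adjoint recursion is derived directly from an explicit Euler update rather than from a separately discretized adjoint ODE, and the resulting gradient is checked against finite differences. The model, cost function, and parameter are assumptions for illustration, not the multibody dynamics formulation of the paper.

    ```python
    import numpy as np

    # Toy discrete adjoint: gradient of the terminal cost J = 0.5*(y_N - y_target)^2
    # with respect to the parameter p of the explicit-Euler-discretized model
    # y_{n+1} = y_n + h*(-p*y_n). The adjoint recursion comes directly from the
    # discrete update, and the result is checked against finite differences.

    h, N, y_init, y_target = 0.01, 200, 1.0, 0.2

    def forward(p):
        ys = [y_init]
        for _ in range(N):
            ys.append(ys[-1] + h * (-p * ys[-1]))
        return np.array(ys)

    def cost(p):
        return 0.5 * (forward(p)[-1] - y_target) ** 2

    def gradient_discrete_adjoint(p):
        ys = forward(p)
        lam = ys[-1] - y_target          # lambda_N = dJ/dy_N
        dJdp = 0.0
        for n in range(N - 1, -1, -1):
            dJdp += lam * (-h * ys[n])   # lambda_{n+1} * d(update)/dp at step n
            lam = lam * (1.0 - h * p)    # lambda_n = lambda_{n+1} * d(update)/dy_n
        return dJdp

    p, eps = 1.5, 1e-6
    fd = (cost(p + eps) - cost(p - eps)) / (2 * eps)
    print("adjoint gradient:        ", gradient_discrete_adjoint(p))
    print("finite-difference check: ", fd)
    ```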

  13. Exact analytic solution for the spin-up maneuver of an axially symmetric spacecraft

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello

    2014-11-01

    The problem of spinning-up an axially symmetric spacecraft subjected to an external torque constant in magnitude and parallel to the symmetry axis is considered. The existing exact analytic solution for an axially symmetric body is applied for the first time to this problem. The proposed solution is valid for any initial conditions of attitude and angular velocity and for any length of time and rotation amplitude. Furthermore, the proposed solution can be numerically evaluated up to any desired level of accuracy. Numerical experiments and comparison with an existing approximated solution and with the integration of the equations of motion are reported in the paper. Finally, a new approximated solution obtained from the exact one is introduced in this paper.

  14. Two-Photon Transitions in Hydrogen-Like Atoms

    NASA Astrophysics Data System (ADS)

    Martinis, Mladen; Stojić, Marko

    Different methods for evaluating two-photon transition amplitudes in hydrogen-like atoms are compared with the improved method of direct summation. Three separate contributions to the two-photon transition probabilities in hydrogen-like atoms are calculated. The first one coming from the summation over discrete intermediate states is performed up to nc(max) = 35. The second contribution from the integration over the continuum states is performed numerically. The third contribution coming from the summation from nc(max) to infinity is calculated in an approximate way using the mean level energy for this region. It is found that the choice of nc(max) controls the numerical error in the calculations and can be used to increase the accuracy of the results much more efficiently than in other methods.

  15. A review and evaluation of numerical tools for fractional calculus and fractional order controls

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Liu, Lu; Dehghan, Sina; Chen, YangQuan; Xue, Dingyü

    2017-06-01

    In recent years, as fractional calculus becomes more and more broadly used in research across different academic disciplines, there are increasing demands for numerical tools for the computation of fractional integration/differentiation and the simulation of fractional order systems. Being asked from time to time which tool is suitable for a specific application, the authors decided to carry out this survey to present recapitulative information on the available tools in the literature, in the hope of benefiting researchers with different academic backgrounds. With this motivation, the present article collects the scattered tools into a dashboard view, briefly introduces their usage and algorithms, evaluates the accuracy, compares the performance, and provides informative comments for selection.
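
    As a small, self-contained example of the kind of computation such tools perform, the sketch below evaluates a Riemann-Liouville fractional integral with a simple product-integration rule and checks it against the known closed form for f(t) = t. The rule, order, and node count are assumptions; this is not one of the surveyed toolboxes.

    ```python
    import numpy as np
    from math import gamma

    # Riemann-Liouville fractional integral of order alpha,
    #   (I^alpha f)(t) = 1/Gamma(alpha) * integral_0^t (t - s)^(alpha - 1) f(s) ds,
    # computed with a product-integration rule: f is taken piecewise constant
    # (midpoint values) and the weakly singular kernel is integrated exactly on
    # each subinterval.

    def rl_fractional_integral(f, t, alpha, n=2000):
        s = np.linspace(0.0, t, n + 1)
        mid = 0.5 * (s[:-1] + s[1:])                              # midpoint samples of f
        weights = ((t - s[:-1]) ** alpha - (t - s[1:]) ** alpha) / alpha
        return np.sum(f(mid) * weights) / gamma(alpha)

    alpha, t = 0.5, 2.0
    numeric = rl_fractional_integral(lambda s: s, t, alpha)
    exact = t ** (1 + alpha) / gamma(2 + alpha)                   # closed form for f(s) = s
    print("numeric:", numeric, " exact:", exact)
    ```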

  16. Numerical experiments on the accuracy of ENO and modified ENO schemes

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    Further numerical experiments are made assessing an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives comparable results with the original ENO schemes for discontinuous problems.

  17. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence.

    PubMed

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-06-15

    Medical Subject Headings (MeSHs) are used by National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates the applications of biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as prediction by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple evidence for MeSH annotation. We propose a novel framework, MeSHLabeler, to integrate multiple evidence for accurate MeSH annotation by using 'learning to rank'. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI and the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve the prediction accuracy. MeSHLabeler won the first place in Task 2A of 2014 BioASQ challenge, achieving the Micro F-measure of 0.6248 for 9,040 citations provided by the BioASQ challenge. Note that this accuracy is around 9.15% higher than 0.5724, obtained by MTI. The software is available upon request. © The Author 2015. Published by Oxford University Press.

  18. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence

    PubMed Central

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-01-01

    Motivation: Medical Subject Headings (MeSHs) are used by National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates the applications of biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as prediction by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple evidence for MeSH annotation. Methods: We propose a novel framework, MeSHLabeler, to integrate multiple evidence for accurate MeSH annotation by using ‘learning to rank’. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI and the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve the prediction accuracy. Results: MeSHLabeler won the first place in Task 2A of 2014 BioASQ challenge, achieving the Micro F-measure of 0.6248 for 9,040 citations provided by the BioASQ challenge. Note that this accuracy is around 9.15% higher than 0.5724, obtained by MTI. Availability and implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn PMID:26072501

  19. Evaluation of the accuracy of the Rotating Parallel Ray Omnidirectional Integration for instantaneous pressure reconstruction from the measured pressure gradient

    NASA Astrophysics Data System (ADS)

    Moreto, Jose; Liu, Xiaofeng

    2017-11-01

    The accuracy of the Rotating Parallel Ray omnidirectional integration for pressure reconstruction from the measured pressure gradient (Liu et al., AIAA paper 2016-1049) is evaluated against both the Circular Virtual Boundary omnidirectional integration (Liu and Katz, 2006 and 2013) and the conventional Poisson equation approach. Dirichlet condition at one boundary point and Neumann condition at all other boundary points are applied to the Poisson solver. A direct numerical simulation database of isotropic turbulence flow (JHTDB), with a homogeneously distributed random noise added to the entire field of DNS pressure gradient, is used to assess the performance of the methods. The random noise, generated by the Matlab function Rand, has a magnitude varying randomly within the range of +/-40% of the maximum DNS pressure gradient. To account for the effect of the noise distribution pattern on the reconstructed pressure accuracy, a total of 1000 different noise distributions achieved by using different random number seeds are involved in the evaluation. Final results after averaging the 1000 realizations show that the error of the reconstructed pressure normalized by the DNS pressure variation range is 0.15 +/-0.07 for the Poisson equation approach, 0.028 +/-0.003 for the Circular Virtual Boundary method and 0.027 +/-0.003 for the Rotating Parallel Ray method, indicating the robustness of the Rotating Parallel Ray method in pressure reconstruction. Sponsor: The San Diego State University UGP program.

  20. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present a new, almost analytical approach to solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for big data, first, we consider a pair of integral and differential equations, which are related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the pair of the eigenvectors (PSWF) and eigenvalues can be obtained from a relatively small number of analytical Legendre functions. Then, the eigenvalues in the PSWF integral equation are expressed in terms of functional values of the PSWF and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data: ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than the conventional data-independent Fourier representation and also the conventional direct numerical K-L representation in terms of both accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
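
    For contrast with the PSWF-based construction, the purely numerical route mentioned above (forming a covariance matrix and solving the matrix eigenvalue problem) can be sketched as follows. The synthetic random-phase records, mode count, and sizes are illustrative assumptions, not the paper's wave data or method.

    ```python
    import numpy as np

    # Conventional discrete Karhunen-Loeve (K-L) representation: build the sample
    # covariance matrix of an ensemble of records, solve the matrix eigenvalue
    # problem, and reconstruct each record from the leading modes. This is the
    # purely numerical route the paper replaces with a PSWF-based construction.

    rng = np.random.default_rng(0)
    n_t, n_rec = 256, 400
    t = np.linspace(0.0, 50.0, n_t)
    freqs = np.linspace(0.3, 1.5, 20)
    records = sum(np.cos(np.outer(np.full(n_rec, f), t)
                         + rng.uniform(0, 2 * np.pi, (n_rec, 1))) / np.sqrt(f)
                  for f in freqs)                     # irregular-wave-like ensemble

    records -= records.mean(axis=0)                   # zero-mean fluctuations
    C = records.T @ records / n_rec                   # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(C)                # K-L matrix eigenvalue problem
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    m = 30                                            # keep the leading m K-L modes
    coeffs = records @ eigvec[:, :m]
    reconstruction = coeffs @ eigvec[:, :m].T
    err = np.linalg.norm(records - reconstruction) / np.linalg.norm(records)
    print(f"relative reconstruction error with {m} modes: {err:.3e}")
    ```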

  1. Determination of the Fracture Parameters in a Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Lin, Chung-Yi

    2000-01-01

    A modified J-integral, namely the equivalent domain integral, is derived for a three-dimensional anisotropic cracked solid to evaluate the stress intensity factor along the crack front using the finite element method. Based on the equivalent domain integral method with auxiliary fields, an interaction integral is also derived to extract the second fracture parameter, the T-stress, from the finite element results. The auxiliary fields are the two-dimensional plane strain solutions of monoclinic materials with the plane of symmetry at x(sub 3) = 0 under point loads applied at the crack tip. These solutions are expressed in a compact form based on the Stroh formalism. Both integrals can be implemented into a single numerical procedure to determine the distributions of stress intensity factor and T-stress components, T11, T13, and thus T33, along a three-dimensional crack front. The effects of plate thickness and crack length on the variation of the stress intensity factor and T-stresses through the thickness are investigated in detail for through-thickness center-cracked plates (isotropic and orthotropic) and orthotropic stiffened panels under pure mode-I loading conditions. For all the cases studied, T11 remains negative. For plates with the same dimensions, a larger size of crack yields larger magnitude of the normalized stress intensity factor and normalized T-stresses. The results in orthotropic stiffened panels exhibit an opposite trend in general. As expected, for the thicker panels, the fracture parameters evaluated through the thickness, except the region near the free surfaces, approach two-dimensional plane strain solutions. In summary, the numerical methods presented in this research demonstrate their high computational effectiveness and good numerical accuracy in extracting these fracture parameters from the finite element results in three-dimensional cracked solids.

  2. Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1990-01-01

    An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.

  3. Discretization of the induced-charge boundary integral equation.

    PubMed

    Bardhan, Jaydeep P; Eisenberg, Robert S; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  4. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the DFTB-predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data with ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.
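
    The optimization step described above (adjusting the parameters of simple analytic functions to minimize an objective measuring the error against reference data) can be illustrated with a toy least-squares fit. The exponential functional form, the synthetic reference energies, and the use of SciPy's BFGS minimizer are assumptions for illustration; the actual work optimizes full DFTB pair potentials and bond/overlap integrals against ab initio data with simulated annealing and steepest descent.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Fit the parameters of a simple analytic radial function, V(r) = A*exp(-r/rho),
    # so that an objective measuring the error against reference energies is
    # minimized. The synthetic reference data are generated from known parameters
    # plus noise, purely to demonstrate the optimization step.

    def model(params, r):
        A, rho = params
        return A * np.exp(-r / rho)

    rng = np.random.default_rng(1)
    r_ref = np.linspace(1.0, 4.0, 25)
    e_ref = model([120.0, 0.6], r_ref) + rng.normal(0.0, 0.05, r_ref.size)

    def objective(params):
        # sum of squared errors between the model and the reference energies
        return np.sum((model(params, r_ref) - e_ref) ** 2)

    result = minimize(objective, x0=[50.0, 1.0], method="BFGS")
    print("fitted A, rho:", result.x, " objective:", result.fun)
    ```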

  5. Discretization of the induced-charge boundary integral equation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Eisenberg, Robert S.; Gillespie, Dirk

    2009-07-01

    Boundary-element methods (BEMs) for solving integral equations numerically have been used in many fields to compute the induced charges at dielectric boundaries. In this paper, we consider a more accurate implementation of BEM in the context of ions in aqueous solution near proteins, but our results are applicable more generally. The ions that modulate protein function are often within a few angstroms of the protein, which leads to the significant accumulation of polarization charge at the protein-solvent interface. Computing the induced charge accurately and quickly poses a numerical challenge in solving a popular integral equation using BEM. In particular, the accuracy of simulations can depend strongly on seemingly minor details of how the entries of the BEM matrix are calculated. We demonstrate that when the dielectric interface is discretized into flat tiles, the qualocation method of Tausch [IEEE Trans. Comput.-Aided Des. 20, 1398 (2001)] to compute the BEM matrix elements is always more accurate than the traditional centroid-collocation method. Qualocation is not more expensive to implement than collocation and can save significant computational time by reducing the number of boundary elements needed to discretize the dielectric interfaces.

  6. Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems

    DOE PAGES

    Najm, Habib N.; Valorani, Mauro

    2014-04-12

    We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.

  7. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE PAGES

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.; ...

    2017-10-17

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the DFTB-predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data with ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  8. The structure of non-hierarchical triple system stability regions

    NASA Astrophysics Data System (ADS)

    Martynova, A. I.; Orlov, V. V.; Rubinov, A. V.

    2009-08-01

    A detailed study of a two-dimensional section of the initial conditions region in the planar three-body problem is performed. The initial conditions for three well-known stable periodic orbits (Schubart's orbit, Broucke's orbit, and the figure-eight orbit) belong to this section. Continuous stability regions (for a fixed integration interval) generated by these periodic orbits are found. Zones of quick stability violation are outlined. The analysis of some concrete trajectories coming from various stability regions is performed. In particular, trajectories possessing a varying number of "eights" formed by the moving triple-system components are discovered. Orbits with librations are also found. A new periodic orbit originating from the zone adjoining the region of Schubart's orbit is discovered. This orbit has reversibility points (each of the outer bodies possesses a reversibility point) and two points of close double approach of the central body to each of the outer bodies. The influence of the numerical integration accuracy on the results is studied. The structure of the stability regions is preserved in calculations with different values of the precision parameter, different numerical integration methods, and different regularization algorithms for the equations of motion.

  9. Evaluation of registration accuracy between Sentinel-2 and Landsat 8

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia

    2016-08-01

    Starting from June 2015, Sentinel-2A is delivering high resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B along with the integration of Landsat images will provide time series with an unprecedented revisit time indispensable for numerous monitoring applications, in which high resolution multi-temporal information is required. They include agriculture, water bodies, natural hazards to name a few. However, the combined use of multi-temporal images requires an accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as geometrical model. Results demonstrate that sub-pixel accuracy was achieved between 10 m resolution Sentinel-2 bands (band 3) and 15 m resolution panchromatic Landsat images (band 8).
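
    The geometric model used in the evaluation, an affine transformation between matched image locations, can be estimated by ordinary least squares once point correspondences are available. The sketch below does exactly that on synthetic correspondences (a known affine map plus noise); the point sets and noise level are assumptions, and real correspondences would come from the digital image correlation step described above.

    ```python
    import numpy as np

    # Least-squares estimate of a 2-D affine transformation from matched point
    # pairs. The "matched" points are synthetic, generated from a known affine
    # map plus noise, just to demonstrate the estimation step.

    rng = np.random.default_rng(3)
    src = rng.uniform(0, 1000, size=(200, 2))                  # points in image A (pixels)
    A_true = np.array([[1.0002, 0.0003], [-0.0004, 0.9998]])   # slight scale/shear/rotation
    t_true = np.array([1.7, -0.9])                             # sub-pixel shift
    dst = src @ A_true.T + t_true + rng.normal(0, 0.2, src.shape)

    # Solve dst ~ [src, 1] @ P for the 3x2 parameter matrix P (6 affine parameters).
    design = np.hstack([src, np.ones((src.shape[0], 1))])
    P, *_ = np.linalg.lstsq(design, dst, rcond=None)

    residual = design @ P - dst
    print("estimated affine matrix:\n", P[:2].T)
    print("estimated shift:", P[2])
    print("RMS registration residual (pixels):", np.sqrt((residual ** 2).mean()))
    ```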

  10. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  11. A hybrid method combining the surface integral equation method and ray tracing for the numerical simulation of high frequency diffraction involved in ultrasonic NDT

    NASA Astrophysics Data System (ADS)

    Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.

    2018-05-01

    Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. By exploiting surface measurements of the echoes of ultrasonic incident waves after their propagation through the medium, it allows potential defects (cracks and inhomogeneities) to be detected and the medium to be characterized. The understanding and interpretation of these experimental measurements is performed with the help of numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances but nevertheless do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to the simulation of diffraction phenomena in acoustic or elastodynamic media. We provide its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments discussed.

  12. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.

  13. Revisiting measuring colour gamut of the color-reproducing system: interpretation aspects

    NASA Astrophysics Data System (ADS)

    Sysuev, I. A.; Varepo, L. G.; Trapeznikova, O. V.

    2018-04-01

    According to the ISO standard, the volume of the color gamut body is used to evaluate color reproduction quality. This volume describes the number of colors that lie within a certain region of the color space. There are methods for evaluating the reproduction quality of a multi-colour image using numerical integration, but this approach does not provide high accuracy of the analysis. In this connection, the task of increasing the accuracy of color reproduction evaluation is still relevant. In order to determine the color mass of a color space region, it is suggested to select the necessary color density values from a map corresponding to a given degree of sampling, without explicit mathematical calculation, which reflects the practical significance and novelty of this solution.

  14. Numerical Analysis of Dusty-Gas Flows

    NASA Astrophysics Data System (ADS)

    Saito, T.

    2002-02-01

    This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used for validating the accuracy and performance of the code. The code is then extended for simulating two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, we can choose numerical schemes independently for the different phases. A semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue and the code implementation on Origin2000 is also described. Flow profiles of both the gas and solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions of unsteady multidimensional simulations.

  15. On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness

    NASA Astrophysics Data System (ADS)

    Vashkov'yak, M. A.

    2018-01-01

    The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.

  16. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve the accurate scale estimation, which can additionally give a prior location of the target. Furthermore, by the high efficiency of data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. At last, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.

  17. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. On the energy integral for first post-Newtonian approximation

    NASA Astrophysics Data System (ADS)

    O'Leary, Joseph; Hill, James M.; Bennett, James C.

    2018-07-01

    The post-Newtonian approximation for general relativity is widely adopted by the geodesy and astronomy communities. It has been successfully exploited for the inclusion of relativistic effects in practically all geodetic applications and techniques such as satellite/lunar laser ranging and very long baseline interferometry. Presently, the levels of accuracy required in geodetic techniques require that reference frames, planetary and satellite orbits and signal propagation be treated within the post-Newtonian regime. For arbitrary scalar W and vector gravitational potentials W^j (j=1,2,3), we present a novel derivation of the energy associated with a test particle in the post-Newtonian regime. The integral so obtained appears not to have been given previously in the literature and is deduced through algebraic manipulation on seeking a Jacobi-like integral associated with the standard post-Newtonian equations of motion. The new integral is independently verified through a variational formulation using the post-Newtonian metric components and is subsequently verified by numerical integration of the post-Newtonian equations of motion.

  19. A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, the accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
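
    The numerical fragility of the conventional covariance update, which motivates factorized filters such as the U-D algorithm studied above, can be illustrated by comparing it with the symmetrized Joseph form in single precision. This is only a hedged illustration of the general issue (the system, measurement model, and precision are assumptions); it does not implement the Bierman-Thornton U-D factorization itself.

    ```python
    import numpy as np

    # Two algebraically equivalent Kalman covariance measurement updates:
    #   conventional: P+ = (I - K H) P
    #   Joseph form:  P+ = (I - K H) P (I - K H)^T + K R K^T
    # The script prints the departure from symmetry and the smallest eigenvalue
    # of each result in single precision; in ill-conditioned settings the
    # conventional form is the one that tends to lose symmetry and positive
    # semi-definiteness, which motivates factorized (e.g. U-D) filters.

    rng = np.random.default_rng(7)
    n, dtype = 6, np.float32
    A = rng.normal(size=(n, n)).astype(dtype)
    P = (A @ A.T + 1e-6 * np.eye(n, dtype=dtype)).astype(dtype)   # prior covariance
    H = rng.normal(size=(1, n)).astype(dtype)                     # scalar measurement
    R = np.array([[1e-6]], dtype=dtype)                           # very accurate measurement

    S = H @ P @ H.T + R
    K = (P @ H.T) @ np.linalg.inv(S)
    I = np.eye(n, dtype=dtype)

    P_conv = (I - K @ H) @ P
    P_joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T

    for name, Pp in (("conventional", P_conv), ("Joseph", P_joseph)):
        asym = np.max(np.abs(Pp - Pp.T))
        min_eig = np.min(np.linalg.eigvalsh(0.5 * (Pp + Pp.T)))
        print(f"{name:12s}  max asymmetry = {asym:.2e}   min eigenvalue = {min_eig:.2e}")
    ```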

  20. Structured approaches to large-scale systems: Variational integrators for interconnected Lagrange-Dirac systems and structured model reduction on Lie groups

    NASA Astrophysics Data System (ADS)

    Parks, Helen Frances

    This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long time simulations in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the fields of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the treatment of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Principal Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.

  1. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method or regularization relationship has been studied. By introducing the new concept of discretized operator matrix, evaluation of the double surface integrals is reduced to calculate the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation is comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. It is found that the numerical results rapidly converge to the corresponding analytical solutions as finer meshes are applied.

  2. Rapid transfer alignment of an inertial navigation system using a marginal stochastic integration filter

    NASA Astrophysics Data System (ADS)

    Zhou, Dapeng; Guo, Lei

    2018-01-01

    This study aims to address the rapid transfer alignment (RTA) issue of an inertial navigation system with large misalignment angles. The strong nonlinearity and high dimensionality of the system model pose a significant challenge to the estimation of the misalignment angles. In this paper, a 15-dimensional nonlinear model for RTA has been exploited, and it is shown that the functions for the model description exhibit a conditionally linear substructure. Then, a modified stochastic integration filter (SIF) called marginal SIF (MSIF) is developed to incorporate into the nonlinear model, where the number of sample points is significantly reduced but the estimation accuracy of SIF is retained. Comparisons between the MSIF-based RTA and the previously well-known methodologies are carried out through numerical simulations and a van test. The results demonstrate that the newly proposed method has an obvious accuracy advantage over the extended Kalman filter, the unscented Kalman filter and the marginal unscented Kalman filter. Further, the MSIF achieves a comparable performance to SIF, but with a significantly lower computation load.

  3. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  4. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  5. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    NASA Astrophysics Data System (ADS)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which seamlessly integrate semiclassical approximations into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of the efficiency and accuracy of these methods.

  6. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost.

  7. Computational Approaches to Simulation and Analysis of Large Conformational Transitions in Proteins

    NASA Astrophysics Data System (ADS)

    Seyler, Sean L.

    In a typical living cell, millions to billions of proteins--nanomachines that fluctuate and cycle among many conformational states--convert available free energy into mechanochemical work. A fundamental goal of biophysics is to ascertain how 3D protein structures encode specific functions, such as catalyzing chemical reactions or transporting nutrients into a cell. Protein dynamics span femtosecond timescales (i.e., covalent bond oscillations) to large conformational transition timescales in, and beyond, the millisecond regime (e.g., glucose transport across a phospholipid bilayer). Actual transition events are fast but rare, occurring orders of magnitude faster than typical metastable equilibrium waiting times. Equilibrium molecular dynamics (EqMD) can capture atomistic detail and solute-solvent interactions, but even microseconds of sampling attainable nowadays still falls orders of magnitude short of transition timescales, especially for large systems, rendering observations of such "rare events" difficult or effectively impossible. Advanced path-sampling methods exploit reduced physical models or biasing to produce plausible transitions while balancing accuracy and efficiency, but quantifying their accuracy relative to other numerical and experimental data has been challenging. Indeed, new horizons in elucidating protein function necessitate that present methodologies be revised to more seamlessly and quantitatively integrate a spectrum of methods, both numerical and experimental. In this dissertation, experimental and computational methods are put into perspective using the enzyme adenylate kinase (AdK) as an illustrative example. We introduce Path Similarity Analysis (PSA)--an integrative computational framework developed to quantify transition path similarity. PSA not only reliably distinguished AdK transitions by the originating method, but also traced pathway differences between two methods back to charge-charge interactions (neglected by the stereochemical model, but not the all-atom force field) in several conserved salt bridges. Cryo-electron microscopy maps of the transporter Bor1p are directly incorporated into EqMD simulations using MD flexible fitting to produce viable structural models and infer a plausible transport mechanism. Conforming to the theme of integration, a short compendium of an exploratory project--developing a hybrid atomistic-continuum method--is presented, including initial results and a novel fluctuating hydrodynamics model and corresponding numerical code.
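    Path Similarity Analysis rests on metrics that compare whole transition paths as geometric objects. The fragment below is a generic illustration of one such metric, the symmetric Hausdorff distance between two paths stored as ordered coordinate arrays; it is not the PSA implementation itself, and the two-dimensional toy paths are stand-ins for trajectories in configuration space.

      # Symmetric Hausdorff distance between two paths, each an (n_frames, n_dims) array.
      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 50)
      path_a = np.column_stack([t, np.sin(np.pi * t)])            # reference path
      path_b = path_a + 0.02 * rng.normal(size=path_a.shape)      # perturbed copy

      # The symmetric distance is the larger of the two directed distances.
      d = max(directed_hausdorff(path_a, path_b)[0],
              directed_hausdorff(path_b, path_a)[0])
      print("Hausdorff distance between the two paths:", d)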

  8. Flap-lag-torsional dynamics of extensional and inextensional rotor blades in hover and in forward flight

    NASA Technical Reports Server (NTRS)

    Dasilva, C.

    1982-01-01

    The reduction of the O(ε³) integro-differential equations to ordinary differential equations using a set of orthogonal functions is described. Attention was focused on the hover flight condition. The set of Galerkin integrals that appear in the reduced equations was evaluated by making use of nonrotating beam modes. Although a large amount of computer time was needed to accomplish this task, the Galerkin integrals so evaluated were stored on tape on a permanent basis. Several of the coefficients were also obtained in closed form in order to check the accuracy of the numerical computations. The equilibrium solution to the set of 3n equations obtained was determined as the solution to a minimization problem.

  9. High-precision numerical integration of equations in dynamics

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    An important requirement in solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
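    To make the polynomial right-hand-side idea concrete, the sketch below integrates the harmonic oscillator x' = v, v' = -x, whose Taylor coefficients about each step follow from the recurrences (k+1) a_{k+1} = b_k and (k+1) b_{k+1} = -a_k. The step size, series order and test problem are illustrative only, not the algorithms or error estimates of the paper.

      # Taylor-series time stepping for a system with polynomial right-hand sides.
      import numpy as np

      def taylor_step(x, v, h, order=15):
          """Advance x' = v, v' = -x by one step of size h using a truncated Taylor series."""
          a, b = [x], [v]
          for k in range(order):
              a.append(b[k] / (k + 1))       # from x' = v
              b.append(-a[k] / (k + 1))      # from v' = -x
          x_new = sum(ak * h**k for k, ak in enumerate(a))
          v_new = sum(bk * h**k for k, bk in enumerate(b))
          return x_new, v_new

      x, v, h = 1.0, 0.0, 0.1
      n = int(round(2 * np.pi / h))
      for _ in range(n):
          x, v = taylor_step(x, v, h)
      print("x after n steps:", x, " exact:", np.cos(n * h))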

  10. Sequential ranging integration times in the presence of CW interference in the ranging channel

    NASA Technical Reports Server (NTRS)

    Mathur, Ashok; Nguyen, Tien

    1986-01-01

    The Deep Space Network (DSN), managed by the Jet Propulsion Laboratory for NASA, is used primarily for communication with interplanetary spacecraft. The high sensitivity required to achieve planetary communications makes the DSN very susceptible to radio-frequency interference (RFI). In this paper, an analytical model is presented of the performance degradation of the DSN sequential ranging subsystem in the presence of downlink CW interference in the ranging channel. A trade-off between the ranging component integration times and the ranging signal-to-noise ratio to achieve a desired level of range measurement accuracy and the probability of error in the code components is also presented. Numerical results presented illustrate the required trade-offs under various interference conditions.

  11. Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM

    NASA Technical Reports Server (NTRS)

    Martin, Thomas J.; Dulikravich, George S.

    1993-01-01

    A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.

  12. Strategy for reflector pattern calculation - Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute force FFT described in this note. Furthermore, the brute force FFT imposes virtually no restriction on the reflector geometry.

  13. Strategy for reflector pattern calculation: Let the computer do the work

    NASA Technical Reports Server (NTRS)

    Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.

    1985-01-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute force FFT described in this note. Furthermore, the brute force FFT imposes virtually no restriction on the reflector geometry.
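    The brute-force approach advocated in these two notes amounts to sampling the aperture (or equivalent current) distribution and letting a 2-D FFT produce the pattern on a grid of direction cosines. The sketch below illustrates that step for a uniform circular aperture; the aperture size, sampling, zero-padding and sign convention are assumptions made only for the illustration.

      # Far-field pattern of a sampled aperture via a zero-padded 2-D FFT.
      import numpy as np

      wavelength = 1.0
      N, pad, dx = 128, 512, 0.25 * wavelength          # aperture samples, FFT size, spacing

      x = (np.arange(N) - N / 2) * dx
      X, Y = np.meshgrid(x, x)
      aperture = np.where(np.hypot(X, Y) <= 10 * wavelength, 1.0, 0.0)   # uniform circular aperture

      pattern = np.fft.fftshift(np.fft.fft2(aperture, s=(pad, pad))) * dx * dx
      u = np.fft.fftshift(np.fft.fftfreq(pad, d=dx)) * wavelength        # direction-cosine grid

      iy, ix = np.unravel_index(np.argmax(np.abs(pattern)), pattern.shape)
      print("pattern peaks at (u, v) =", (u[ix], u[iy]))                 # boresight, (0, 0)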

  14. Poisson's ratio of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Christiansson, Henrik; Helsing, Johan

    1996-05-01

    Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.

  15. A time-efficient implementation of Extended Kalman Filter for sequential orbit determination and a case study for onboard application

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Wang, Haihong; Chen, Qiuli; Chen, Zhonggui; Zheng, Jinjun; Cheng, Haowen; Liu, Lin

    2018-07-01

    Onboard orbit determination (OD) is often used in space missions, with which mission support can be partially accomplished autonomously, with less dependency on ground stations. In major Global Navigation Satellite Systems (GNSS), inter-satellite link is also an essential upgrade in the future generations. To serve for autonomous operation, sequential OD method is crucial to provide real-time or near real-time solutions. The Extended Kalman Filter (EKF) is an effective and convenient sequential estimator that is widely used in onboard application. The filter requires the solutions of state transition matrix (STM) and the process noise transition matrix, which are always obtained by numerical integration. However, numerically integrating the differential equations is a CPU intensive process and consumes a large portion of the time in EKF procedures. In this paper, we present an implementation that uses the analytical solutions of these transition matrices to replace the numerical calculations. This analytical implementation is demonstrated and verified using a fictitious constellation based on selected medium Earth orbit (MEO) and inclined Geosynchronous orbit (IGSO) satellites. We show that this implementation performs effectively and converges quickly, steadily and accurately in the presence of considerable errors in the initial values, measurements and force models. The filter is able to converge within 2-4 h of flight time in our simulation. The observation residual is consistent with simulated measurement error, which is about a few centimeters in our scenarios. Compared to results implemented with numerically integrated STM, the analytical implementation shows results with consistent accuracy, while it takes only about half the CPU time to filter a 10-day measurement series. The future possible extensions are also discussed to fit in various missions.
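    The filter cycle referred to above is the standard EKF predict/update recursion, in which the state transition matrix can come either from numerical integration of the variational equations or, as in this paper, from an analytical approximation. The following minimal sketch shows that cycle on a toy constant-velocity model; the orbit dynamics, measurement model and analytical transition matrices of the paper are not reproduced here.

      # Generic EKF predict/update step driven by a supplied state-transition matrix Phi.
      import numpy as np

      def ekf_step(x, P, Phi, Q, z, h_fun, H, R):
          x_pred = Phi @ x                              # propagate the state
          P_pred = Phi @ P @ Phi.T + Q                  # propagate the covariance
          y = z - h_fun(x_pred)                         # measurement residual
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy usage: state [position, velocity], position-only measurements.
      dt = 1.0
      Phi = np.array([[1.0, dt], [0.0, 1.0]])
      Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
      H = np.array([[1.0, 0.0]])
      x, P = np.array([0.0, 0.0]), np.eye(2)
      rng = np.random.default_rng(0)
      for k in range(1, 11):
          z = np.array([k * dt + 0.1 * rng.normal()])   # truth: unit velocity
          x, P = ekf_step(x, P, Phi, Q, z, lambda s: H @ s, H, R)
      print("estimated velocity:", x[1])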

  16. Recent advances in analytical satellite theory

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1978-01-01

    Recent work on analytical satellite perturbation theory has involved the completion of a revision to 4th order for zonal harmonics, the addition of a treatment for ocean tides, an extension of the treatment for the noninertial reference system, and the completion of a theory for direct solar-radiation pressure and earth-albedo pressure. Combined with a theory for tesseral-harmonics, lunisolar, and body-tide perturbations, these formulations provide a comprehensive orbit-computation program. Detailed comparisons with numerical integration and observations are presented to assess the accuracy of each theoretical development.

  17. Region of validity of the finite–temperature Thomas–Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru

    We determine the region of applicability of the finite–temperature Thomas–Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of computations has been achieved by using a special approach for the solution of the boundary problem and numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. Also we offer simple approximations of the boundaries of validity for practical applications.

  18. A mixed shear flexible finite element for the analysis of laminated plates

    NASA Technical Reports Server (NTRS)

    Putcha, N. S.; Reddy, J. N.

    1984-01-01

    A mixed shear flexible finite element based on the Hencky-Mindlin type shear deformation theory of laminated plates is presented and its behavior in bending is investigated. The element consists of three displacements, two rotations, and three moments as the generalized degrees of freedom per node. The numerical convergence and accuracy characteristics of the element are investigated by comparing the finite element solutions with the exact solutions. The present study shows that reduced-order integration of the stiffness coefficients due to shear is necessary to obtain accurate results for thin plates.

  19. A framework for scalable parameter estimation of gene circuit models using structural information.

    PubMed

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.

  20. Integral Equations in Computational Electromagnetics: Formulations, Properties and Isogeometric Analysis

    NASA Astrophysics Data System (ADS)

    Lovell, Amy Elizabeth

    Computational electromagnetics (CEM) provides numerical methods to simulate electromagnetic waves interacting with their environment. Boundary integral equation (BIE) based methods, which solve Maxwell's equations in homogeneous or piecewise homogeneous media, are both efficient and accurate, especially for scattering and radiation problems. The development and analysis of electromagnetic BIEs have been a very active topic in CEM research. Indeed, there are still many open problems that need to be addressed or further studied. A short and important list includes (1) closed-form or quasi-analytical solutions to time-domain integral equations, (2) catastrophic cancellations at low frequencies, (3) ill-conditioning due to high mesh density, multi-scale discretization, and growing electrical size, and (4) lack of flexibility due to re-meshing when an increasing number of forward numerical simulations is involved in the electromagnetic design process. This dissertation addresses several of these aspects of boundary integral equations in computational electromagnetics. The first contribution of the dissertation is to construct quasi-analytical solutions to time-dependent boundary integral equations using a direct approach. A direct inverse Fourier transform of the time-harmonic solutions is not stable because the inverse Fourier transform of spherical Hankel functions does not exist. Using new addition theorems for the time-domain Green's function and dyadic Green's functions, time-domain integral equations governing transient scattering problems of spherical objects are solved directly and stably for the first time. Additionally, the direct time-dependent solutions, together with the newly proposed time-domain dyadic Green's functions, can enrich the time-domain spherical multipole theory. The second contribution is to create a novel method of moments (MoM) framework to solve electromagnetic boundary integral equations on subdivision surfaces. The aim is to avoid the meshing and re-meshing stages and thereby accelerate the design process when the geometry needs to be updated. Two schemes to construct basis functions on the subdivision surface have been explored. One is to use div-conforming basis functions, and the other is to create a rigorous isogeometric approach based on subdivision basis functions with better smoothness properties. This new framework provides better accuracy, more stability and higher flexibility. The third contribution is a new stable integral equation formulation that avoids catastrophic cancellations due to low-frequency breakdown or dense-mesh breakdown. Many of the conventional integral equations and their associated post-processing operations suffer from numerical catastrophic cancellations, which can lead to ill-conditioning of the linear systems or serious accuracy problems. Examples include low-frequency breakdown and dense-mesh breakdown. Another instability may come from nontrivial null spaces of the integral operators involved, which might be related to spurious resonance or topology breakdown. This dissertation presents several sets of new boundary integral equations and studies their analytical properties. The first proposed formulation leads to scalar boundary integral equations in which only scalar unknowns are involved. Besides the requirements of gaining more stability and better conditioning in the resulting linear systems, multi-physics simulation is another driving force for new formulations. Formulations based on the scalar and vector potentials (rather than the electromagnetic fields) have been studied for this purpose. These new contributions address different stages of the boundary integral equation workflow in an almost independent manner; e.g., the isogeometric analysis framework can be used to solve different boundary integral equations, and the time-dependent solutions to integral equations from different formulations can be obtained through the same proposed methodology.

  1. Fast and high-order numerical algorithms for the solution of multidimensional nonlinear fractional Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Akbar

    2018-02-01

    In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one we introduce another scheme, based on the Crank-Nicolson approach, which has second-order accuracy in the time variable. Because the Fourier spectral method is used for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
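    The key property exploited above is that the fractional Laplacian is diagonal in Fourier space, with symbol |k|^alpha, so the linear part of a split step can be advanced exactly there while the nonlinear part is advanced pointwise. The toy 1-D sketch below illustrates the idea; the model equation, coefficients and first-order splitting are placeholders, not the paper's fourth-order or Crank-Nicolson schemes.

      # Toy 1-D split-step sketch: u_t = -(-Delta)^(alpha/2) u + u - |u|^2 u.
      import numpy as np

      alpha, L, N, dt, steps = 1.5, 20 * np.pi, 256, 1e-3, 2000
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
      symbol = np.abs(k) ** alpha                        # Fourier symbol of the fractional Laplacian

      u = np.exp(-x**2) * (1.0 + 0.0j)                   # smooth initial condition
      for _ in range(steps):
          u = u + dt * (u - np.abs(u)**2 * u)            # nonlinear part, advanced pointwise
          u = np.fft.ifft(np.exp(-symbol * dt) * np.fft.fft(u))   # linear part, exact in Fourier space
      print("max |u| after integration:", np.abs(u).max())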

  2. Multiscale Mechano-Biological Finite Element Modelling of Oncoplastic Breast Surgery—Numerical Study towards Surgical Planning and Cosmetic Outcome Prediction

    PubMed Central

    Eiben, Bjoern; Hipwell, John H.; Williams, Norman R.; Keshtgar, Mo; Hawkes, David J.

    2016-01-01

    Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815

  3. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.

  4. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  5. Multicategory reclassification statistics for assessing improvements in diagnostic accuracy

    PubMed Central

    Li, Jialiang; Jiang, Binyan; Fine, Jason P.

    2013-01-01

    In this paper, we extend the definitions of the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) in the context of multicategory classification. Both measures were proposed in Pencina and others (2008. Evaluating the added predictive ability of a new marker: from area under the receiver operating characteristic (ROC) curve to reclassification and beyond. Statistics in Medicine 27, 157–172) as numeric characterizations of accuracy improvement for binary diagnostic tests and were shown to have certain advantage over analyses based on ROC curves or other regression approaches. Estimation and inference procedures for the multiclass NRI and IDI are provided in this paper along with necessary asymptotic distributional results. Simulations are conducted to study the finite-sample properties of the proposed estimators. Two medical examples are considered to illustrate our methodology. PMID:23197381
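    For reference, the binary-outcome, category-based NRI that the paper generalizes can be computed directly from old and new risk predictions. The sketch below is a generic illustration of that baseline quantity; the risk cutoffs and the simulated data are assumptions, and the multicategory extensions and inference procedures of the paper are not reproduced.

      # Category-based NRI for a binary outcome from old and new predicted risks.
      import numpy as np

      def categorical_nri(y, old_risk, new_risk, cutoffs=(0.1, 0.3)):
          old_cat = np.digitize(old_risk, cutoffs)
          new_cat = np.digitize(new_risk, cutoffs)
          up, down = new_cat > old_cat, new_cat < old_cat
          event, nonevent = (y == 1), (y == 0)
          return ((up[event].mean() - down[event].mean())
                  + (down[nonevent].mean() - up[nonevent].mean()))

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, size=500)
      old_risk = np.clip(0.2 + 0.1 * rng.normal(size=500) + 0.1 * y, 0.0, 1.0)
      new_risk = np.clip(0.2 + 0.1 * rng.normal(size=500) + 0.2 * y, 0.0, 1.0)  # stronger marker
      print("categorical NRI:", categorical_nri(y, old_risk, new_risk))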

  6. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.

  7. An integral equation formulation for rigid bodies in Stokes flow in three dimensions

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan

    2017-03-01

    We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O (n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.

  8. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    PubMed

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline, and are difficult to appropriately integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.

  9. Integrated system for automated financial document processing

    NASA Astrophysics Data System (ADS)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.

  10. A finite element-boundary integral method for scattering and radiation by two- and three-dimensional structures

    NASA Technical Reports Server (NTRS)

    Jin, Jian-Ming; Volakis, John L.; Collins, Jeffery D.

    1991-01-01

    A review of a hybrid finite element-boundary integral formulation for scattering and radiation by two- and three-dimensional composite structures is presented. In contrast to other hybrid techniques involving the finite element method, the proposed one is in principle exact and can be implemented using a low O(N) storage. This is of particular importance for large scale applications and is a characteristic of the boundary chosen to terminate the finite element mesh, usually as close to the structure as possible. A certain class of these boundaries lead to convolutional boundary integrals which can be evaluated via the fast Fourier transform (FFT) without a need to generate a matrix; thus, retaining the O(N) storage requirement. The paper begins with a general description of the method. A number of two- and three-dimensional applications are then given, including numerical computations which demonstrate the method's accuracy, efficiency, and capability.

  11. Boundary particle method for Laplace transformed time fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Fu, Zhuo-Jia; Chen, Wen; Yang, Hai-Tian

    2013-02-01

    This paper develops a novel boundary meshless approach, the Laplace transformed boundary particle method (LTBPM), for numerical modeling of time fractional diffusion equations. It implements the Laplace transform technique to obtain the corresponding time-independent inhomogeneous equation in Laplace space and then employs a truly boundary-only meshless boundary particle method (BPM) to solve this Laplace-transformed problem. Unlike the other boundary discretization methods, the BPM does not require any inner nodes, since the recursive composite multiple reciprocity technique (RC-MRM) is used to convert the inhomogeneous problem into a higher-order homogeneous problem. Finally, the Stehfest numerical inverse Laplace transform (NILT) is implemented to retrieve the numerical solutions of time fractional diffusion equations from the corresponding BPM solutions. In comparison with finite difference discretization, the LTBPM introduces the Laplace transform and the Stehfest NILT algorithm to deal with the time fractional derivative term, which evades the costly convolution integral calculation involved in approximating the time fractional derivative and avoids the effect of the time step on numerical accuracy and stability. Consequently, it can effectively simulate long time-history fractional diffusion systems. Error analysis and numerical experiments demonstrate that the present LTBPM is highly accurate and computationally efficient for 2D and 3D time fractional diffusion equations.
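    The Stehfest inversion step mentioned above needs the Laplace-space solution only at a handful of real points s = k ln 2 / t. The sketch below is a generic implementation of the Gaver-Stehfest formula checked on the known pair F(s) = 1/(s + 1), f(t) = exp(-t); the number of terms N and the test transform are illustrative, and in the LTBPM the F values would come from the boundary particle method solves.

      # Gaver-Stehfest numerical inverse Laplace transform.
      import numpy as np
      from math import factorial, log

      def stehfest_weights(N=12):
          """Stehfest weights V_i for an even number of terms N."""
          V = np.zeros(N)
          for i in range(1, N + 1):
              s = 0.0
              for k in range((i + 1) // 2, min(i, N // 2) + 1):
                  s += (k ** (N // 2) * factorial(2 * k)
                        / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                           * factorial(i - k) * factorial(2 * k - i)))
              V[i - 1] = (-1) ** (i + N // 2) * s
          return V

      def stehfest_invert(F, t, N=12):
          """f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t), using F at real s only."""
          V, a = stehfest_weights(N), log(2.0) / t
          return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

      for t in (0.5, 1.0, 2.0):
          print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))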

  12. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    NASA Astrophysics Data System (ADS)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-01

    We propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  13. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  14. A fast and accurate method for perturbative resummation of transverse momentum-dependent observables

    DOE PAGES

    Kang, Daekyoung; Lee, Christopher; Vaidya, Varun

    2018-04-27

    Here, we propose a novel strategy for the perturbative resummation of transverse momentum-dependent (TMD) observables, using the q_T spectra of gauge bosons (γ*, Higgs) in pp collisions in the regime of low (but perturbative) transverse momentum q_T as a specific example. First we introduce a scheme to choose the factorization scale for virtuality in momentum space instead of in impact parameter space, allowing us to avoid integrating over (or cutting off) a Landau pole in the inverse Fourier transform of the latter to the former. The factorization scale for rapidity is still chosen as a function of impact parameter b, but in such a way designed to obtain a Gaussian form (in ln b) for the exponentiated rapidity evolution kernel, guaranteeing convergence of the b integral. We then apply this scheme to obtain the q_T spectra for Drell-Yan and Higgs production at NNLL accuracy. In addition, using this scheme we are able to obtain a fast semi-analytic formula for the perturbative resummed cross sections in momentum space: analytic in its dependence on all physical variables at each order of logarithmic accuracy, up to a numerical expansion for the pure mathematical Bessel function in the inverse Fourier transform that needs to be performed just once for all observables and kinematics, to any desired accuracy.

  15. An extension of the finite cell method using boolean operations

    NASA Astrophysics Data System (ADS)

    Abedian, Alireza; Düster, Alexander

    2017-05-01

    In the finite cell method, the fictitious domain approach is combined with high-order finite elements. The geometry of the problem is taken into account by integrating the finite cell formulation over the physical domain to obtain the corresponding stiffness matrix and load vector. In this contribution, an extension of the FCM is presented wherein both the physical and fictitious domain of an element are simultaneously evaluated during the integration. In the proposed extension of the finite cell method, the contribution of the stiffness matrix over the fictitious domain is subtracted from the cell, resulting in the desired stiffness matrix which reflects the contribution of the physical domain only. This method results in an exponential rate of convergence for porous domain problems with a smooth solution and accurate integration. In addition, it reduces the computational cost, especially when applying adaptive integration schemes based on the quadtree/octree. Based on 2D and 3D problems of linear elastostatics, numerical examples serve to demonstrate the efficiency and accuracy of the proposed method.

  16. A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices

    NASA Astrophysics Data System (ADS)

    AlQurashi, Ahmed; Selvakumar, C. R.

    2018-06-01

    There has been tremendous growth in the field of integrated circuits (ICs) over the past fifty years. Scaling laws have mandated that both lateral and vertical dimensions be reduced, together with a steady increase in doping densities. Most modern semiconductor devices invariably have heavily doped regions where Fermi-Dirac integrals are required. Several attempts have been devoted to developing analytical approximations for Fermi-Dirac integrals, since numerical computations of Fermi-Dirac integrals are difficult to use in semiconductor devices, although several highly accurate tabulated functions are available. Most of these analytical expressions are not sufficiently suitable for semiconductor device applications due to their poor accuracy, the requirement of complicated calculations, and difficulties in differentiating and integrating. A new approximation for the Fermi-Dirac integral of order 1/2, obtained using Prony's method, is developed and discussed in this paper. The approximation is accurate enough (Mean Absolute Error (MAE) = 0.38%) and simple enough to be used in semiconductor device equations. The new approximation of the Fermi-Dirac integral is applied to a more generalized Einstein relation, which is an important relation in semiconductor devices.
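    A direct-quadrature evaluation of the order-1/2 Fermi-Dirac integral provides the reference against which closed-form approximations of this kind can be checked. The sketch below uses the unnormalized convention without the 2/sqrt(pi) prefactor (conventions vary), and the nondegenerate-limit check is only a sanity test, not the paper's accuracy assessment.

      # F_{1/2}(eta) = int_0^inf sqrt(x) / (1 + exp(x - eta)) dx, by adaptive quadrature.
      import numpy as np
      from scipy.integrate import quad
      from scipy.special import expit     # expit(eta - x) = 1 / (1 + exp(x - eta)), overflow-safe

      def fd_half(eta):
          val, _ = quad(lambda x: np.sqrt(x) * expit(eta - x), 0.0, np.inf, limit=200)
          return val

      for eta in (-5.0, 0.0, 5.0):
          print(eta, fd_half(eta))
      # Nondegenerate (Boltzmann) limit: F_{1/2}(eta) -> (sqrt(pi)/2) exp(eta) for eta << 0.
      print("Boltzmann limit at eta = -5:", np.sqrt(np.pi) / 2 * np.exp(-5.0))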

  17. Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations

    NASA Astrophysics Data System (ADS)

    Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.

    2016-12-01

    Multi-constellation GNSS (e.g. GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for the real-time retrieval of atmospheric parameters in support of numerical weather prediction (NWP) nowcasting or severe weather event monitoring. In this study, the observations from different GNSS are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradients (especially high-resolution gradient estimates) and slant total delay (STD), are carefully analyzed and evaluated against VLBI, radiosonde, water vapor radiometer and numerical weather model data to independently validate the performance of the individual GNSS and to demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. Numerous results show that multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would be beneficial for atmospheric sounding systems, especially for nowcasting of extreme weather.

  18. 3D fluid-structure modelling and vibration analysis for fault diagnosis of Francis turbine using multiple ANN and multiple ANFIS

    NASA Astrophysics Data System (ADS)

    Saeed, R. A.; Galybin, A. N.; Popov, V.

    2013-01-01

    This paper discusses condition monitoring and fault diagnosis in a Francis turbine based on the integration of numerical modelling with several different artificial intelligence (AI) techniques. In this study, a numerical approach for fluid-structure (turbine runner) analysis is presented. The results of the numerical analysis provide frequency response function (FRF) data sets along the x-, y- and z-directions under different operating loads and different positions and sizes of faults in the structure. To extract features and reduce the dimensionality of the obtained FRF data, principal component analysis (PCA) has been applied. Subsequently, the extracted features are formulated and fed into multiple artificial neural networks (ANN) and multiple adaptive neuro-fuzzy inference systems (ANFIS) in order to identify the size and position of the damage in the runner and estimate the turbine operating conditions. The results demonstrated the effectiveness of this approach and provide satisfactory accuracy even when the input data are corrupted with a certain level of noise.

  19. Modelling wetting and drying effects over complex topography

    NASA Astrophysics Data System (ADS)

    Tchamen, G. W.; Kahawita, R. A.

    1998-06-01

    The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system such as the zero inertia model (ZIM) can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when some particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error at the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may be generally attributed to: (1) the use of a scheme that is inappropriate for such complex flow conditions (mixed regimes); (2) improper treatment of a friction source term or a large local curvature in topography; (3) mishandling of a cell that is partially wet/dry. In this paper, a tentative attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits to the proposed solutions. Frequently, the enforcement of robustness is made at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas. However, the treatment of the source terms (friction, Coriolis and particularly the bathymetry) is also of prime importance and must not be overlooked. Linearization combined with switching between explicit and implicit integration can overcome the stiffness of the friction and Coriolis terms and provide stable numerical integration. The treatment of the bathymetry source term is much more delicate. For cells undergoing a transient wet-dry process, the imposition of zero velocity stabilizes most of the approximations. However, this artificial zero velocity condition can be the cause of considerable error, especially when fast-moving fronts are involved. Besides these difficulties linked with the internal position of the front within a cell versus the limited resolution of a numerical grid, it appears that the second derivative that defines whether the bed is locally convex or concave is a key indicator for stability. A convex bottom may lead to unbounded solutions. It appears that this behaviour is not linked to the numerics (numerical scheme) but rather to the mathematical theory of the SWWM. These concerns about stability have taken precedence, until now, over the crucial and related question of accuracy, especially near a moving front, and how these possible inaccuracies at the leading edge may affect the solution at interior points within the domain. This paper presents an in-depth, fully two-dimensional analysis of the aforementioned problem that has not been addressed before.
The purpose of the present communication is not to propose what could be viewed as a final solution, but rather to provide some key considerations that may reveal the ingredients and insight necessary for the development of accurate and robust solutions in the future.

  20. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D_j. Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 U_j.
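    The convection-velocity estimate quoted above comes from the time lag at which the two-point space-time correlation between separated probes peaks: for a structure convecting at speed U_c, a probe pair separated by dx correlates best at lag tau = dx / U_c. The sketch below illustrates the procedure on a synthetic convecting signal; the signal model, separation and sampling are assumptions, not the jet data of the thesis.

      # Convection velocity from the peak of a two-point space-time correlation.
      import numpy as np

      dt, dx, u_true = 0.01, 0.5, 0.65                  # sample interval, probe separation, true speed
      rng = np.random.default_rng(0)
      n = 5000
      base = np.convolve(rng.normal(size=n + 200), np.ones(50) / 50, mode="same")
      delay = int(round(dx / u_true / dt))              # integer-sample transit time
      probe1 = base[:n]
      probe2 = np.roll(base, delay)[:n]                 # delayed copy = convected signal

      corr = np.correlate(probe2 - probe2.mean(), probe1 - probe1.mean(), mode="full")
      lags = (np.arange(corr.size) - (n - 1)) * dt
      tau_peak = lags[np.argmax(corr)]
      print("estimated convection velocity:", dx / tau_peak, " true:", u_true)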

  1. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
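    The per-object deviation analysis described above reduces, at its core, to computing distances between a point cloud and a sampled model surface and summarizing them. The sketch below is a generic illustration using nearest-neighbour distances via a KD-tree; the synthetic planar "wall", the scan noise level and the summary statistics are assumptions, not the LOA thresholds or the case-study data.

      # Cloud-to-model deviations: nearest-neighbour distances from scan points to model samples.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      # Synthetic model surface: a unit square wall sampled on a grid in the z = 0 plane.
      gx, gy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
      model_pts = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
      # Synthetic scan: noisy points near the same plane (3 mm standard deviation, metres).
      scan = np.column_stack([rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000),
                              rng.normal(0.0, 0.003, 5000)])

      dist, _ = cKDTree(model_pts).query(scan)          # per-point deviation, scan -> model
      print("mean deviation:", dist.mean(), " 95th percentile:", np.quantile(dist, 0.95))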

  2. The Laguerre finite difference one-way equation solver

    NASA Astrophysics Data System (ADS)

    Terekhov, Andrew V.

    2017-05-01

    This paper presents a new finite difference algorithm for solving the 2D one-way wave equation with a preliminary approximation of a pseudo-differential operator by a system of partial differential equations. As opposed to the existing approaches, the integral Laguerre transform is used instead of the Fourier transform. After carrying out the approximation of the spatial variables, it is possible to obtain systems of linear algebraic equations with better computing properties and to reduce the computer costs for their solution. High accuracy of calculations is attained at the expense of employing finite difference approximations of higher accuracy order that are based on the dispersion-relationship-preserving method and the Richardson extrapolation in the downward continuation direction. The numerical experiments have verified that, as compared to the spectral difference method based on the Fourier transform, the new algorithm allows one to calculate wave fields with a higher degree of accuracy and a lower level of numerical noise and artifacts, including those for non-smooth velocity models. In the context of the geophysical problem, post-stack migration for velocity models of the Syncline and Sigsbee2A types has been carried out. It is shown that the images obtained contain less noise and are considerably better focused than those obtained by the known Fourier Finite Difference and Phase-Shift Plus Interpolation methods. There is an opinion that purely finite difference approaches do not allow carrying out the seismic migration procedure with sufficient accuracy; however, the results obtained disprove this statement. For the supercomputer implementation, it is proposed to use the parallel dichotomy algorithm when solving systems of linear algebraic equations with block-tridiagonal matrices.

  3. MEKS: A program for computation of inclusive jet cross sections at hadron colliders

    NASA Astrophysics Data System (ADS)

    Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P.

    2013-06-01

    EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented. Program Summary: Program title: MEKS 1.0 Catalogue identifier: AEOX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9234 No. of bytes in distributed program, including test data, etc.: 51997 Distribution format: tar.gz Programming language: Fortran (main program), C (CUBA library and analysis program). Computer: All. Operating system: Any UNIX-like system. RAM: ~300 MB Classification: 11.1. External routines: LHAPDF (https://lhapdf.hepforge.org/) Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics. Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over the available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled into a finely binned two-dimensional histogram, from which the final cross sections with typical experimental binning and cuts are computed by an independent analysis program. Monte Carlo sampling of event weights is tuned automatically to get better efficiency. Running time: Depends on details of the calculation and the sought numerical accuracy. See benchmark performance in Section 4. The tests provided take approximately 27 min for the jetbin run and a few seconds for jetana.
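
    The weighted-event histogramming strategy described in the program summary can be illustrated with a minimal sketch. The kinematics and weights below are toy stand-ins (not NLO matrix elements, and without VEGAS grid adaptation); the point is only the pattern of storing weighted events in a finely binned 2-D histogram and rebinning offline with experimental cuts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the phase-space sampling: draw (pT, y) points with toy weights.
n_events = 200_000
pt = rng.uniform(20.0, 500.0, n_events)              # jet transverse momentum [GeV]
y = rng.uniform(-3.0, 3.0, n_events)                 # jet rapidity
weights = np.exp(-pt / 80.0) * (1.0 + 0.1 * y**2)    # arbitrary toy weight per event

# Step 1: store weighted events in a finely binned 2-D histogram (pT vs. y).
fine_pt_edges = np.linspace(20.0, 500.0, 481)        # 1 GeV bins
fine_y_edges = np.linspace(-3.0, 3.0, 121)           # 0.05 bins in rapidity
H, _, _ = np.histogram2d(pt, y, bins=[fine_pt_edges, fine_y_edges], weights=weights)

# Step 2: offline analysis -- rebin to experimental pT bins within |y| < 0.5
# without re-running the expensive phase-space integration.
exp_pt_edges = np.array([20, 40, 80, 160, 320, 500], dtype=float)
y_mask = (fine_y_edges[:-1] >= -0.5) & (fine_y_edges[1:] <= 0.5)
coarse = np.add.reduceat(H[:, y_mask].sum(axis=1),
                         np.searchsorted(fine_pt_edges[:-1], exp_pt_edges[:-1]))
dsigma_dpt = coarse / np.diff(exp_pt_edges)           # per-GeV "cross section" (toy units)
print(dsigma_dpt)
```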

  4. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm for computing the ellipsoidal terrain correction, based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in the cylindrical equal-area map projection of a reference ellipsoid, has been developed. As the first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area map projection of a cylinder tangent to a point on the surface of the reference ellipsoid is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the vector of gravitational intensity of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection, and based on the transformed area elements Cartesian mass elements with the same height as that of the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m^2/s^2 for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, this difference is less than 1.5 × 10^-4 m^2/s^2. These results indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates and with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and, at the same time, the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.

  5. Shear viscosity of binary mixtures: The Gay-Berne potential

    NASA Astrophysics Data System (ADS)

    Khordad, R.

    2012-05-01

    The Gay-Berne (GB) potential model is an interesting and useful model for studying real systems. Using this potential model, we examine the thermodynamical properties of some anisotropic binary mixtures in two different phases, liquid and gas. For this purpose, we apply the integral equation method and solve numerically the Percus-Yevick (PY) integral equation. Then, we obtain the expansion coefficients of the correlation functions to calculate the thermodynamical properties. Finally, we compare our results with the available experimental data [e.g., HFC-125 + propane, R-125/143a, methanol + toluene, benzene + methanol, cyclohexane + ethanol, benzene + ethanol, carbon tetrachloride + ethyl acetate, and methanol + ethanol]. The results show that the GB potential model is capable of predicting the thermodynamical properties of binary mixtures with acceptable accuracy.

  6. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075

  7. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation.

    PubMed

    Müller, Eike H; Scheichl, Rob; Shardlow, Tony

    2015-04-08

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
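
    A minimal sketch of the discrete-increment idea mentioned in both records above: a level-coupled MLMC estimator for a Langevin-type (Ornstein-Uhlenbeck) SDE in which the Gaussian increments are replaced by three-point discrete random variables that match the relevant moments. The SDE, the plain Euler scheme, and all parameters are illustrative assumptions, not the operator-splitting integrators of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, X0, T = 1.0, 0.5, 1.0, 1.0

def discrete_increments(n_paths, n_steps, dt):
    """Three-point discrete random variables whose moments match those of
    Gaussian increments (a standard weak-approximation substitute)."""
    xi = rng.choice([-np.sqrt(3.0), 0.0, np.sqrt(3.0)],
                    size=(n_paths, n_steps), p=[1/6, 2/3, 1/6])
    return xi * np.sqrt(dt)

def level_estimator(level, n_paths, M=2):
    """Y_l = E[P_fine - P_coarse] on level l for dX = -theta*X dt + sigma dW,
    with fine increments summed pairwise to drive the coupled coarse path."""
    n_fine = M**level
    dt_f = T / n_fine
    dW = discrete_increments(n_paths, n_fine, dt_f)
    Xf = np.full(n_paths, X0)
    for k in range(n_fine):
        Xf = Xf - theta * Xf * dt_f + sigma * dW[:, k]
    if level == 0:
        return Xf.mean()
    dt_c = M * dt_f
    dWc = dW.reshape(n_paths, n_fine // M, M).sum(axis=2)
    Xc = np.full(n_paths, X0)
    for k in range(n_fine // M):
        Xc = Xc - theta * Xc * dt_c + sigma * dWc[:, k]
    return (Xf - Xc).mean()

# Telescoping sum over levels gives the MLMC estimate of E[X_T].
estimate = sum(level_estimator(l, n_paths=20_000) for l in range(5))
print("MLMC estimate of E[X_T]:", estimate, " exact:", X0 * np.exp(-theta * T))
```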

  8. High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled fluid-structure problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.

  9. Automated generation of influence functions for planar crack problems

    NASA Technical Reports Server (NTRS)

    Sire, Robert A.; Harris, David O.; Eason, Ernest D.

    1989-01-01

    A numerical procedure for the generation of influence functions for Mode I planar problems is described. The resulting influence functions are in a form for convenient evaluation of stress-intensity factors for complex stress distributions. Crack surface displacements are obtained by a least-squares solution of the Williams eigenfunction expansion for displacements in a cracked body. Discrete values of the influence function, evaluated using the crack surface displacements, are curve fit using an assumed functional form. The assumed functional form includes appropriate limit-behavior terms for very deep and very shallow cracks. Continuous representation of the influence function provides a convenient means for evaluating stress-intensity factors for arbitrary stress distributions by numerical integration. The procedure is demonstrated for an edge-cracked strip and a radially cracked disk. Comparisons with available published results demonstrate the accuracy of the procedure.
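
    To make the integration step concrete, the sketch below evaluates a Mode-I stress-intensity factor as a weighted integral of an arbitrary crack-face stress distribution. It uses the classical weight (influence) function for a center crack in an infinite plate rather than the paper's edge-cracked strip or cracked disk, and it removes the integrable end-point singularity by substitution before applying ordinary quadrature.

```python
import numpy as np

def sif_center_crack(stress, a, n=200):
    """Mode-I stress-intensity factor for a center crack of half-length a under a
    symmetric crack-face stress sigma(x), via the classical weight function
        K_I = 2*sqrt(a/pi) * int_0^a sigma(x)/sqrt(a^2 - x^2) dx.
    The substitution x = a*sin(theta) removes the end-point singularity, so plain
    trapezoidal quadrature converges rapidly."""
    theta = np.linspace(0.0, np.pi / 2.0, n)
    integrand = stress(a * np.sin(theta))
    return 2.0 * np.sqrt(a / np.pi) * np.trapz(integrand, theta)

# Check against the closed form K = sigma*sqrt(pi*a) for uniform stress.
sigma0, a = 100.0, 0.02
print(sif_center_crack(lambda x: sigma0 * np.ones_like(x), a),
      sigma0 * np.sqrt(np.pi * a))

# An arbitrary (e.g., thermally induced) stress profile is handled the same way.
print(sif_center_crack(lambda x: sigma0 * (1.0 - (x / a) ** 2), a))
```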

  10. Inferring physical properties of galaxies from their emission-line spectra

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Gallerani, S.; Pallottini, A.

    2017-02-01

    We present a new approach based on Supervised Machine Learning algorithms to infer key physical properties of galaxies (density, metallicity, column density and ionization parameter) from their emission-line spectra. We introduce a numerical code (called GAME, GAlaxy Machine learning for Emission lines) implementing this method and test it extensively. GAME delivers excellent predictive performances, especially for estimates of metallicity and column densities. We compare GAME with the most widely used diagnostics (e.g. R23, [N II] λ6584/Hα indicators) showing that it provides much better accuracy and wider applicability range. GAME is particularly suitable for use in combination with Integral Field Unit spectroscopy, both for rest-frame optical/UV nebular lines and far-infrared/sub-millimeter lines arising from photodissociation regions. Finally, GAME can also be applied to the analysis of synthetic galaxy maps built from numerical simulations.

  11. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

    In this paper, a new analytical estimator for light flicker in the frequency domain is proposed, which also takes into account the frequency components neglected by the classical methods in the literature. The analytical solutions proposed apply to any generic stationary signal affected by interharmonic distortion. The proposed light flicker analytical estimator is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the proposed analytical approach with respect to the other methods in the literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.

  12. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.

  13. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovation of the three-dimensional finite-volume control volume on the quasi-uniform icosahedral grid suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures such as massively parallel CPUs and GPUs to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * a local coordinate system remapped from the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009), * grid points in a table-driven horizontal loop that allow any horizontal point sequence (A. E. MacDonald et al., 2010), * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative positive-definite transport (J.-L. Lee et al., 2010), * icosahedral grid optimization (Wang and Lee, 2011), * all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases such as internal gravity waves and mountain waves in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as the real-data simulations will be shown in the conference.

  14. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, and usually the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the interested quantities (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs) by either approximating the spatial derivative terms with numerical techniques or using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately this resulting system of ODEs is bound by a time step constraint so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time step constraint invoked by the MOC, and the occasional need for extremely small time steps in order to obtain stability and/or accuracy, can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check-valve are compared with test data.
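
    For context, a minimal interior-node MOC update for a single pipe is sketched below; the dynamic components (e.g., the dual-flapper check valve) would supply the boundary values and are the part that motivates coupling with a variable-step integrator. The pipe properties and friction model are generic assumptions, not the study's configuration.

```python
import numpy as np

def moc_step(H, Q, a, g, A, f, D, dx):
    """One Method-of-Characteristics update of head H and flow Q at the interior
    nodes of a single pipe. The time step is fixed by the MOC constraint
    dt = dx / a (Courant number one); boundary nodes come from the attached
    component models and are left untouched here."""
    B = a / (g * A)                      # characteristic impedance
    R = f * dx / (2.0 * g * D * A**2)    # Darcy-Weisbach friction coefficient
    Hn, Qn = H.copy(), Q.copy()
    # C+ characteristic arrives from node i-1, C- from node i+1.
    Cp = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])
    Cm = H[2:] - B * Q[2:] + R * Q[2:] * np.abs(Q[2:])
    Hn[1:-1] = 0.5 * (Cp + Cm)
    Qn[1:-1] = (Cp - Cm) / (2.0 * B)
    return Hn, Qn

# Hypothetical pipe: steady initial state disturbed by a downstream head drop.
H = np.full(51, 100.0); Q = np.full(51, 0.05); H[-1] = 80.0
H, Q = moc_step(H, Q, a=1200.0, g=9.81, A=0.01, f=0.02, D=0.1, dx=10.0)
print(H[45:], Q[45:])
```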

  15. Aerodynamic influence coefficient method using singularity splines.

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1973-01-01

    A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of the loading-function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (termed 'spline'). Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral.

  16. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

    A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.

  17. A New Axi-Symmetric Element for Thin Walled Structures

    NASA Astrophysics Data System (ADS)

    Cardoso, Rui P. R.; Yoon, Jeong Whan; Dick, Robert E.

    2010-06-01

    A new axi-symmetric finite element for sheet metal forming applications is presented in this work. It uses the solid-shell element's concept with only a single element layer and multiple integration points along the thickness direction. The cross section of the element is composed of four nodes with two degrees of freedom each. The proposed formulation overcomes major locking pathologies including transverse shear locking, Poisson's locking and volumetric locking. Some examples are shown to demonstrate the performance and accuracy of the proposed element with special focus on the numerical simulations for the beverage can industry.

  18. Solution methods for one-dimensional viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, John M.; Simitses, George J.

    1987-01-01

    A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.

  19. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1991-01-01

    Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.

  20. Extended analytical formulas for the perturbed Keplerian motion under a constant control acceleration

    NASA Astrophysics Data System (ADS)

    Zuiani, Federico; Vasile, Massimiliano

    2015-03-01

    This paper presents a set of analytical formulae for the perturbed Keplerian motion of a spacecraft under the effect of a constant control acceleration. The proposed set of formulae can treat control accelerations that are fixed in either a rotating or an inertial reference frame. Moreover, the contribution of the zonal harmonic is included in the analytical formulae. It will be shown that the proposed analytical theory allows for the fast computation of long, multi-revolution spirals while maintaining good accuracy. The combined effect of different perturbations and of the shadow regions due to solar eclipse is also included. Furthermore, a simplified control parameterisation is introduced to optimise thrusting patterns with two thrust arcs and two coast arcs per revolution. This simple parameterisation is shown to ensure enough flexibility to describe complex low-thrust spirals. The accuracy and speed of the proposed analytical formulae are compared against a full numerical integration with different integration schemes. An averaging technique is then proposed as an application of the analytical formulae. Finally, the paper presents an example of the design of an optimal low-thrust spiral to transfer a spacecraft from an elliptical to a circular orbit around the Earth.

  1. Real-time simulation of contact and cutting of heterogeneous soft-tissues.

    PubMed

    Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian

    2014-02-01

    This paper presents a numerical method for interactive (real-time) simulations, which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal with the ill-conditioning issues associated with material heterogeneities, with contact boundary conditions, which are one of the main sources of inaccuracies, and with cutting, which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting the heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery, (ii) a simulation of laparoscopic hepatectomy, and (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Analytic theory of orbit contraction

    NASA Technical Reports Server (NTRS)

    Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.

    1977-01-01

    The motion of a satellite in orbit, subject to atmospheric force, and the motion of a reentry vehicle are governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long-duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincare's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.

  3. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.

  4. Accuracy of Three Dimensional Solid Finite Elements

    NASA Technical Reports Server (NTRS)

    Case, W. R.; Vandegrift, R. E.

    1984-01-01

    The results of a study to determine the accuracy of the three dimensional solid elements available in NASTRAN for predicting displacements is presented. Of particular interest in the study is determining how to effectively use solid elements in analyzing thick optical mirrors, as might exist in a large telescope. Surface deformations due to thermal and gravity loading can be significant contributors to the determination of the overall optical quality of a telescope. The study investigates most of the solid elements currently available in either COSMIC or MSC NASTRAN. Error bounds as a function of mesh refinement and element aspect ratios are addressed. It is shown that the MSC solid elements are, in general, more accurate than their COSMIC NASTRAN counterparts due to the specialized numerical integration used. In addition, the MSC elements appear to be more economical to use on the DEC VAX 11/780 computer.

  5. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of Mesoscale testing cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results highlighting improvements made in resolving atmospheric dynamics in the vertical direction where many existing methods are deficient.

  6. A unified radiative magnetohydrodynamics code for lightning-like discharge simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run

    2014-03-15

    A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied naturally to the lightning channel configuration. In this code, the 5th-order weighted essentially non-oscillatory scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The 3rd-order total variation diminishing Runge-Kutta integral operator is also employed to keep the time-space accuracy consistent. The numerical algorithms for the non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from a detailed particle population distribution calculation, which makes the numerical model self-consistent. This code is systematically validated via the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.

  7. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in the multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into the multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in the multi-cylindrical equal-area map projection, is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section area of 10×10 m and a height of 10,000 m. For a mass element with a cross-section area of 1×1 km and a height of 10,000 m the difference is less than 1.5×10^-4 m^2/s^2. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.

  8. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    NASA Astrophysics Data System (ADS)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but which also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
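
    The mechanics of Richardson extrapolation in time can be illustrated with a small sketch: integrate with steps h and h/2 and combine the two results to cancel the leading error term. The explicit Heun method used here is only a stand-in for the optimized diagonally implicit schemes of the paper.

```python
import numpy as np

def heun_step(f, t, y, h):
    """One step of a second-order explicit Runge-Kutta (Heun) method."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate(f, y0, t0, t1, n):
    y, t, h = y0, t0, (t1 - t0) / n
    for _ in range(n):
        y = heun_step(f, t, y, h)
        t += h
    return y

def richardson(f, y0, t0, t1, n, p=2):
    """Combine step sizes h and h/2 to cancel the leading O(h^p) error term:
    y_R = (2^p * y_{h/2} - y_h) / (2^p - 1), raising the observed order by one."""
    y_h = integrate(f, y0, t0, t1, n)
    y_h2 = integrate(f, y0, t0, t1, 2 * n)
    return (2**p * y_h2 - y_h) / (2**p - 1)

# Linear oscillator y'' + y = 0 as a first-order system; exact solution cos(t).
f = lambda t, y: np.array([y[1], -y[0]])
y0 = np.array([1.0, 0.0])
for n in (20, 40, 80):
    err_base = abs(integrate(f, y0, 0.0, 2 * np.pi, n)[0] - 1.0)
    err_rich = abs(richardson(f, y0, 0.0, 2 * np.pi, n)[0] - 1.0)
    print(n, err_base, err_rich)   # extrapolated errors fall roughly one order faster
```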

  9. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical system used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization a single camera approach has been designed. Single camera techniques for 6DOF measurements show a special sensitivity against weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered as well as different scenarios indicating the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
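
    The Monte-Carlo deviation estimation itself is simple to sketch. Below, a 2-D rotation-plus-translation fit serves as a hypothetical stand-in for the single-camera 6DOF estimator; the procedure of perturbing the observations, re-solving, and reporting the spread of the recovered parameters is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the pose estimator: recover a 2-D rotation + translation from
# noisy point correspondences by least squares (Kabsch-style fit).
ref = rng.uniform(-1.0, 1.0, size=(8, 2))
true_angle, true_t = 0.3, np.array([0.5, -0.2])
R = np.array([[np.cos(true_angle), -np.sin(true_angle)],
              [np.sin(true_angle),  np.cos(true_angle)]])
obs_clean = ref @ R.T + true_t

def solve_pose(obs):
    c_ref, c_obs = ref.mean(axis=0), obs.mean(axis=0)
    H = (ref - c_ref).T @ (obs - c_obs)
    U, _, Vt = np.linalg.svd(H)
    Rhat = Vt.T @ U.T
    if np.linalg.det(Rhat) < 0:        # guard against an improper rotation
        Vt[-1] *= -1
        Rhat = Vt.T @ U.T
    angle = np.arctan2(Rhat[1, 0], Rhat[0, 0])
    t = c_obs - c_ref @ Rhat.T
    return np.array([angle, t[0], t[1]])

# Monte-Carlo accuracy assessment: perturb, re-solve, look at the spread.
sigma = 0.01
samples = np.array([solve_pose(obs_clean + rng.normal(scale=sigma, size=obs_clean.shape))
                    for _ in range(2000)])
print("std of [angle, tx, ty]:", samples.std(axis=0))
```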

  10. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.

  11. Low-Storage, Explicit Runge-Kutta Schemes for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Chistopher A.; Carpenter, Mark H.; Lewis, R. Michael

    1999-01-01

    The derivation of low-storage explicit Runge-Kutta (ERK) schemes has been performed in the context of integrating the compressible Navier-Stokes equations via direct numerical simulation. Optimization of ERK methods is done across a broad range of properties, such as stability and accuracy efficiency, linear and nonlinear stability, error-control reliability, step-change stability, and dissipation/dispersion accuracy, subject to varying degrees of memory economization. Following van der Houwen and Wray, 16 ERK pairs are presented using from two to five registers of memory per equation, per grid point and having accuracies from third- to fifth-order. Methods have been assessed using the differential equation testing code DETEST, and with the 1D wave equation. Two of the methods have been applied to the DNS of a compressible jet as well as methane-air and hydrogen-air flames. Derived 3(2) and 4(3) pairs are competitive with existing full-storage methods. Although a substantial efficiency penalty accompanies use of two- and three-register, fifth-order methods, the best contemporary full-storage methods can be nearly matched while still saving two to three registers of memory.

  12. Comparison of line-peak and line-scanning excitation in two-color laser-induced-fluorescence thermometry of OH.

    PubMed

    Kostka, Stanislav; Roy, Sukesh; Lakusta, Patrick J; Meyer, Terrence R; Renfro, Michael W; Gord, James R; Branam, Richard

    2009-11-10

    Two-line laser-induced-fluorescence (LIF) thermometry is commonly employed to generate instantaneous planar maps of temperature in unsteady flames. The use of line scanning to extract the ratio of integrated intensities is less common because it precludes instantaneous measurements. Recent advances in the energy output of high-speed, ultraviolet, optical parametric oscillators have made possible the rapid scanning of molecular rovibrational transitions and, hence, the potential to extract information on gas-phase temperatures. In the current study, two-line OH LIF thermometry is performed in a well-calibrated reacting flow for the purpose of comparing the relative accuracy of various line-pair selections from the literature and quantifying the differences between peak-intensity and spectrally integrated line ratios. Investigated are the effects of collisional quenching, laser absorption, and the integration width for partial scanning of closely spaced lines on the measured temperatures. Data from excitation scans are compared with theoretical line shapes, and experimentally derived temperatures are compared with numerical predictions that were previously validated using coherent anti-Stokes-Raman scattering. Ratios of four pairs of transitions in the A2Sigma+<--X2Pi (1,0) band of OH are collected in an atmospheric-pressure, near-adiabatic hydrogen-air flame over a wide range of equivalence ratios, from 0.4 to 1.4. It is observed that measured temperatures based on the ratio of Q1(14)/Q1(5) transition lines result in the best accuracy and that line scanning improves the measurement accuracy by as much as threefold at low-equivalence-ratio, low-temperature conditions. These results provide a comprehensive analysis of the procedures required to ensure accurate two-line LIF measurements in reacting flows over a wide range of conditions.
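
    A hedged sketch of the underlying two-line ratio relation: with a Boltzmann ground-state population, the ratio of the two (spectrally integrated) signals depends exponentially on the probed level spacing over k_B T. The calibration constant and the numbers below are purely illustrative assumptions, not OH line data from the study.

```python
import numpy as np

K_B = 1.380649e-23        # J/K
HC = 1.98645e-25          # J*m (Planck constant times speed of light)

def two_line_temperature(ratio, E1_cm, E2_cm, C):
    """Temperature from the ratio S2/S1 of two LIF signals probing ground-state
    levels with energies E1, E2 (in cm^-1), assuming a Boltzmann population and a
    calibration constant C lumping line strengths, laser energies and detection
    efficiencies:  S2/S1 = C * exp(-(E2 - E1) * h*c / (k_B * T))."""
    dE = (E2_cm - E1_cm) * 1.0e2 * HC          # level spacing in joules
    return dE / (K_B * np.log(C / ratio))

# Hypothetical numbers only: a signal ratio of 0.45 with C = 2.0 and a
# 1500 cm^-1 level spacing corresponds to roughly 1450 K.
print(two_line_temperature(0.45, 0.0, 1500.0, 2.0))
```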

  13. Use of multivariable asymptotic expansions in a satellite theory

    NASA Technical Reports Server (NTRS)

    Dallas, S. S.

    1973-01-01

    The initial conditions and the perturbative force of the satellite are restricted to yield the motion of an equatorial satellite about an oblate body. In this manner, an exact analytic solution exists and can be used as a standard of comparison in numerical accuracy studies. Detailed numerical accuracy studies of uniformly valid asymptotic expansions were made.

  14. Formulation and Implementation of Nonlinear Integral Equations to Model Neural Dynamics Within the Vertebrate Retina.

    PubMed

    Eshraghian, Jason K; Baek, Seungbum; Kim, Jun-Ho; Iannella, Nicolangelo; Cho, Kyoungrok; Goo, Yong Sook; Iu, Herbert H C; Kang, Sung-Mo; Eshraghian, Kamran

    2018-02-13

    Existing computational models of the retina often compromise between biophysical accuracy and a hardware-adaptable methodology of implementation. When compared to the current modes of vision restoration, algorithmic models often contain a greater correlation between stimuli and the affected neural network, but lack physical hardware practicality. Thus, if the present processing methods are adapted to complement very-large-scale circuit design techniques, it is anticipated that it will engender a more feasible approach to the physical construction of the artificial retina. The computational model presented in this research serves to provide a fast and accurate predictive model of the retina, a deeper understanding of neural responses to visual stimulation, and an architecture that can realistically be transformed into a hardware device. Traditionally, implicit (or semi-implicit) ordinary differential equations (ODEs) have been used for optimal speed and accuracy. We present a novel approach that requires the effective integration of different dynamical time scales within a unified framework of neural responses, where the rod, cone, amacrine, bipolar, and ganglion cells correspond to the implemented pathways. Furthermore, we show that adopting numerical integration can both accelerate retinal pathway simulations by more than 50% when compared with traditional ODE solvers in some cases, and prove to be a more realizable solution for the hardware implementation of predictive retinal models.

  15. A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case

    NASA Astrophysics Data System (ADS)

    Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.

    2017-12-01

    In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.

  16. Accuracy Study of the Space-Time CE/SE Method for Computational Aeroacoustics Problems Involving Shock Waves

    NASA Technical Reports Server (NTRS)

    Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    1999-01-01

    The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem. The order of accuracy of the numerical schemes is investigated. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved to investigate the order of accuracy of the numerical schemes. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is on the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
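
    The order-of-accuracy assessment referred to above is typically done from a grid-refinement table; a minimal sketch (with hypothetical error values) follows.

```python
import numpy as np

def observed_order(errors, ratio=2.0):
    """Observed order of accuracy p from errors on successively refined grids,
    p = log(e_h / e_{h/r}) / log(r). A scheme designed as 2nd order that degrades
    to 1st order across a moving shock would show p ~ 1 in such a table."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(ratio)

# Hypothetical L1 errors from a refinement study (each grid twice as fine):
print(observed_order([4.0e-3, 1.0e-3, 2.5e-4]))   # -> [2., 2.] : second order
print(observed_order([4.0e-3, 2.0e-3, 1.0e-3]))   # -> [1., 1.] : first order
```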

  17. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  18. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.

  19. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.

  20. High-fidelity large eddy simulation for supersonic jet noise prediction

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt M.

    The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused at evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly-parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments and excellent agreement is found. Potential areas of improvement are discussed for future research.

  1. Using GPS RO L1 data for calibration of the atmospheric path delay model for data reduction of the satellite altimetery observations.

    NASA Astrophysics Data System (ADS)

    Petrov, L.

    2017-12-01

    Processing satellite altimetry data requires the computation of the path delay in the neutral atmosphere that is used for correcting ranges. The path delay is computed using numerical weather models, and the accuracy of its computation depends on the accuracy of those models. The accuracy of numerical weather models over Antarctica and Greenland, where the network of ground stations is very sparse, is not well known. I used a dataset of GPS RO L1 data, computed the predicted path delay for RO observations using the numerical weather model GEOS-FPIT, formed the differences with the observed path delay, and used these differences to compute corrections to the a priori refractivity profile. These profiles were used for computing corrections to the a priori zenith path delay. The systematic pattern of these corrections is used for de-biasing the satellite altimetry results and for characterization of the systematic errors caused by mismodeling of the atmosphere.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
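
    A minimal sketch of the thresholded-wavelet idea using the PyWavelets package (the grid size, wavelet family and threshold rule below are illustrative assumptions, not the authors' settings): a binned particle density is transformed, small detail coefficients presumed to be sampling noise are suppressed, and the smoothed field is reconstructed.

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(0)

      # Hypothetical beam: sample a 2-D Gaussian bunch and bin it onto a grid.
      n_particles, n_grid = 200_000, 128
      x, y = rng.normal(size=(2, n_particles))
      density, _, _ = np.histogram2d(x, y, bins=n_grid, range=[[-5, 5], [-5, 5]])
      density /= n_particles  # normalize to unit charge

      # Thresholded wavelet transform (TWT): decompose, soft-threshold the detail
      # coefficients, reconstruct.  The threshold is a simple universal-style rule;
      # the paper's actual thresholding strategy may differ.
      coeffs = pywt.wavedec2(density, wavelet="db4", mode="periodization", level=4)
      detail = np.concatenate([np.ravel(d) for lvl in coeffs[1:] for d in lvl])
      thr = np.median(np.abs(detail)) / 0.6745 * np.sqrt(2.0 * np.log(detail.size))
      denoised = [coeffs[0]] + [
          tuple(pywt.threshold(d, thr, mode="soft") for d in lvl) for lvl in coeffs[1:]
      ]
      smooth_density = pywt.waverec2(denoised, wavelet="db4", mode="periodization")

      print("max |raw - denoised| =", np.abs(density - smooth_density).max())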

  3. Modeling of Turbulent Natural Convection in Enclosed Tall Cavities

    NASA Astrophysics Data System (ADS)

    Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.

    2017-12-01

    It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1:10. For higher aspect ratios, the number of grid cells required for achieving a prescribed accuracy grows significantly.

  4. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, the residual vibration reduction is more and more important as the machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of residual vibration were given and independent of structural design. Since the resulting excitations by the impact load often depend on structural design, this paper aims to propose a new efficient sensitivity analysis method for residual vibration of structures subject to impacts to consider the dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of initial excitations on structural design variables may strongly affect the accuracy of sensitivities.
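
    For a linear state-space model x' = Ax with impact-induced initial state x(0) = x0, the integrated quadratic index J = integral_0^inf x' Q x dt can be evaluated without time marching by solving the Lyapunov equation A'P + PA + Q = 0 and taking J = x0' P x0. A minimal sketch (the single-DOF oscillator and weighting below are illustrative assumptions, not taken from the paper):

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov
      from scipy.integrate import solve_ivp

      # Hypothetical 1-DOF damped oscillator in first-order form: state = [u, u_dot]
      m, c, k = 1.0, 0.4, 25.0
      A = np.array([[0.0, 1.0],
                    [-k / m, -c / m]])
      Q = np.eye(2)                      # weighting of the residual vibration
      x0 = np.array([0.0, 1.0])          # impact-induced initial velocity

      # Solve A^T P + P A = -Q, then J = x0^T P x0
      P = solve_continuous_lyapunov(A.T, -Q)
      J = x0 @ P @ x0

      # Cross-check against direct time integration of x^T Q x
      sol = solve_ivp(lambda t, x: A @ x, (0.0, 100.0), x0, max_step=1e-2)
      J_quad = np.trapz(np.einsum("ij,jk,ik->i", sol.y.T, Q, sol.y.T), sol.t)
      print(J, J_quad)   # the two estimates should agree to a few digits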

  5. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of its existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at the grid spacing of 1/2Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  6. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    NASA Astrophysics Data System (ADS)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth-order are investigated, with focus on their resolvability in terms of number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and locally modify its numerical flux in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.

  7. Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1992-01-01

    Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of the Runge-Kutta (RK) methods and compact difference schemes to calculate the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on the evaluations of dissipative error. The objectives are reducing the numerical damping and, at the same time, preserving numerical stability. While this approach has tremendous success solving steady flows, numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes which relate the phase velocity and wave number may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersion relations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, the criteria are derived for the three- and four-step RK methods to be third- and fourth-order time accurate for the non-linear equations, e.g., flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for the Fourier analyses. The performance of the numerical methods is shown by numerical examples. These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single vortex calculation is extended to simulate vortex pairing. For distances between the two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
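
    As an illustration of the kind of Fourier (dispersion) analysis described above, the sketch below computes the modified wavenumber of the standard fourth-order tridiagonal compact first-derivative scheme and the resulting per-step amplification factor when it is combined with classical four-stage RK for linear advection. The scheme coefficients and CFL number are textbook values, not necessarily those used in the report.

      import numpy as np

      # Fourth-order tridiagonal compact scheme (Lele-type):
      #   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2) (f_{i+1} - f_{i-1}) / (2h)
      # For a Fourier mode exp(i*k*x) the scheme returns i*k_mod instead of i*k:
      theta = np.linspace(1e-6, np.pi, 400)            # k*h
      k_mod = 1.5 * np.sin(theta) / (1.0 + 0.5 * np.cos(theta))

      # Classical 4-stage RK applied to u_t + a u_x = 0 gives the per-step factor
      #   g(z) = 1 + z + z^2/2 + z^3/6 + z^4/24,  with z = -i * CFL * k_mod(theta)
      cfl = 1.0
      z = -1j * cfl * k_mod
      g = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

      # Dissipation error: deviation of |g| from 1; dispersion error: phase error
      amp_err = np.abs(g) - 1.0
      phase_err = -np.angle(g) / cfl - theta           # exact phase shift is -CFL*k*h
      print("max amplification:", np.abs(g).max())
      print("well-resolved range (relative phase error < 1%): k*h up to",
            theta[np.abs(phase_err) / np.maximum(theta, 1e-12) < 0.01].max())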

  8. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.

  9. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    NASA Astrophysics Data System (ADS)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in the CNC (Computer Numerical Control) system we are developing. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. Based on the feedrate and a Taylor series expansion, servo-controlling signals of the 5 axes are obtained for each interpolating cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, and how to integrate the interpolator into the CNC system is described. The servo-controlling structure of the CNC system is also introduced. An illustrative example indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.

  10. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of working equations, their discretization, numerical solution scheme, the modular concept of engine modelling, the program logical structure and some illustrated results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
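
    The central-difference time integration referred to above advances the equations of motion M u'' + C u' + K u = f(t) with an explicit three-level update. A minimal sketch for a single-DOF system under a short impact pulse (the system parameters and pulse are illustrative assumptions, not data from the report):

      import numpy as np

      # Single-DOF test problem: m*u'' + c*u' + k*u = f(t), short half-sine impact
      m, c, k = 1.0, 0.5, 400.0
      dt, n_steps = 1.0e-4, 20000
      t = np.arange(n_steps) * dt
      f = np.where(t < 0.01, 100.0 * np.sin(np.pi * t / 0.01), 0.0)

      u = np.zeros(n_steps)
      # Start-up: u_{-1} from initial conditions u0 = 0, v0 = 0, a0 = f(0)/m
      a0 = (f[0] - c * 0.0 - k * 0.0) / m
      u_prev = 0.0 - dt * 0.0 + 0.5 * dt**2 * a0

      # Central-difference update:
      #   [m/dt^2 + c/(2dt)] u_{n+1} = f_n - (k - 2m/dt^2) u_n - (m/dt^2 - c/(2dt)) u_{n-1}
      lhs = m / dt**2 + c / (2 * dt)
      for n in range(n_steps - 1):
          rhs = f[n] - (k - 2 * m / dt**2) * u[n] - (m / dt**2 - c / (2 * dt)) * u_prev
          u_prev, u[n + 1] = u[n], rhs / lhs

      print("peak displacement:", u.max())
      # Stability of the explicit scheme requires dt < 2/omega_n = 2/sqrt(k/m) = 0.1 here.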

  11. A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1988-01-01

    Research has been undertaken to achieve an improved understanding of physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary to successfully develop advanced propulsion systems now planned for use late in this century and beyond. In order to explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion, and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs were constructed to consider the multicomponent diffusion and convection of important chemical species, the finite rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.

  12. Acoustic coupled fluid-structure interactions using a unified fast multipole boundary element method.

    PubMed

    Wilkes, Daniel R; Duncan, Alec J

    2015-04-01

    This paper presents a numerical model for the acoustic coupled fluid-structure interaction (FSI) of a submerged finite elastic body using the fast multipole boundary element method (FMBEM). The Helmholtz and elastodynamic boundary integral equations (BIEs) are, respectively, employed to model the exterior fluid and interior solid domains, and the pressure and displacement unknowns are coupled between conforming meshes at the shared boundary interface to achieve the acoustic FSI. The low frequency FMBEM is applied to both BIEs to reduce the algorithmic complexity of the iterative solution from O(N^2) to O(N^1.5) operations per matrix-vector product for N boundary unknowns. Numerical examples are presented to demonstrate the algorithmic and memory complexity of the method, which are shown to be in good agreement with the theoretical estimates, while the solution accuracy is comparable to that achieved by a conventional finite element-boundary element FSI model.

  13. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

    In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
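
    For reference, the baseline O(τ^(2-α)) rate mentioned above is that of the standard L1 discretization of the Caputo derivative (not the improved T-Caputo scheme of the paper). A minimal sketch of the L1 formula, verified on u(t) = t^2 whose Caputo derivative is known in closed form:

      import numpy as np
      from math import gamma

      def caputo_l1(u, tau, alpha):
          """Standard L1 approximation of the Caputo derivative of order 0 < alpha < 1
          at the final time level, given samples u[0..n] with uniform step tau."""
          n = len(u) - 1
          k = np.arange(n)
          b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)          # L1 weights
          diffs = u[1:][::-1] - u[:-1][::-1]                     # u_{n-k} - u_{n-k-1}
          return (tau ** (-alpha) / gamma(2 - alpha)) * np.dot(b, diffs)

      # Sanity check on u(t) = t^2, whose Caputo derivative is 2 t^(2-alpha) / Gamma(3-alpha)
      alpha, T = 0.6, 1.0
      for n in (40, 80, 160, 320):
          tau = T / n
          t = np.linspace(0.0, T, n + 1)
          approx = caputo_l1(t ** 2, tau, alpha)
          exact = 2 * T ** (2 - alpha) / gamma(3 - alpha)
          print(n, abs(approx - exact))   # error shrinks roughly like tau^(2-alpha)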

  14. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
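
    An exponential time differencing (ETD) step of the kind mentioned above treats the stiff linear part of a Langevin-type equation exactly and the nonlinear part explicitly. A minimal first-order sketch for a single complex mode dw/dt = -lam*w + N(w) + noise, where the relaxation rate, nonlinearity and noise amplitude are illustrative and the update is not the paper's regularized polymer model:

      import numpy as np

      rng = np.random.default_rng(1)

      lam = 50.0                  # stiff linear relaxation rate (illustrative)
      def nonlinear(w):           # weak cubic nonlinearity (illustrative)
          return -0.1 * w**3

      dt, n_steps, noise = 0.01, 5000, 0.2
      E = np.exp(-lam * dt)                 # exact integrating factor for the linear part
      phi = (1.0 - E) / lam                 # ETD1 weight: integral of exp(-lam*(dt-s)) ds

      w = 1.0 + 0.0j
      traj = np.empty(n_steps, dtype=complex)
      for n in range(n_steps):
          # ETD1 update: linear part handled exactly, nonlinear part frozen over the
          # step; the additive noise is added in simple Euler-Maruyama fashion.
          w = E * w + phi * nonlinear(w) + noise * np.sqrt(dt) * rng.standard_normal()
          traj[n] = w

      print("sampled <|w|^2> ~", np.mean(np.abs(traj[1000:])**2))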

  15. An equilibrium method for prediction of transverse shear stresses in a thick laminated plate

    NASA Technical Reports Server (NTRS)

    Chaudhuri, R. Z.

    1986-01-01

    The first two equations of equilibrium are utilized to compute the transverse shear stress variation through the thickness of a thick laminated plate after in-plane stresses have been computed using an assumed quadratic displacement triangular element based on transverse inextensibility and layerwise constant shear angle theory (LCST). The centroid of the triangle is the point of exceptional accuracy for transverse shear stresses. Numerical results indicate close agreement with elasticity theory. An interesting comparison between the present theory and that based on the assumed-stress hybrid finite element approach suggests that the latter does not satisfy the condition of free normal traction at the edge. Comparison with numerical results obtained by using the constant shear angle theory (CST) suggests that the LCST is close to the elasticity solution while the CST is closer to the classical lamination theory (CLT) solution. It is also demonstrated that reduced integration gives faster convergence when the present theory is applied to a thin plate.

  16. Relationship between the spectral line based weighted-sum-of-gray-gases model and the full spectrum k-distribution model

    NASA Astrophysics Data System (ADS)

    Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis

    2014-08-01

    The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation. It confirms that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW model relies on a somewhat arbitrary discretization of the absorption cross section whereas the FSK model finds the spectrally integrated intensity by integration over the smoothly varying cumulative-k distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient as fewer gray gases are required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiencies of these two methods. The FSK model is found more accurate than the SLW model in radiation transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas due to its explicit treatment of ‘clear gas.’

  17. Bending and stretching finite element analysis of anisotropic viscoelastic composite plates

    NASA Technical Reports Server (NTRS)

    Hilton, Harry H.; Yi, Sung

    1990-01-01

    Finite element algorithms have been developed to analyze linear anisotropic viscoelastic plates, with or without holes, subjected to mechanical (bending, tension), temperature, and hygrothermal loadings. The analysis is based on Laplace transforms rather than direct time integrations in order to improve the accuracy of the results and save on extensive computational time and storage. The time-dependent displacement fields in the transverse direction for the cross-ply and angle-ply laminates are calculated and the stacking sequence effects of the laminates are discussed in detail. Creep responses for the plates with or without a circular hole are also studied. The numerical results compare favorably with analytical solutions, i.e., within 1.8 percent for bending and 10^-3 percent for tension. The tension results of the present method are compared with those using the direct time integration scheme.

  18. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making

    PubMed Central

    Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous research supports that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts’ accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graphs enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts’ accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters. PMID:27508519

  19. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    PubMed

    Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous research supports that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graphs enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.

  20. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
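
    The core idea, recurrence relations that generate high-order Taylor coefficients cheaply, can be illustrated on the scalar ODE y' = y^2, whose product nonlinearity is handled by a Cauchy-product recurrence, together with a simple step-size rule based on the size of the last retained coefficient. This sketch only illustrates the technique and is not the SNAP implementation:

      import numpy as np

      def taylor_step(y0, h, order=20):
          """One Taylor step for y' = y^2.  Coefficients follow from the Cauchy product:
             (k+1) c_{k+1} = sum_{j=0}^{k} c_j c_{k-j}."""
          c = np.zeros(order + 1)
          c[0] = y0
          for k in range(order):
              c[k + 1] = np.dot(c[:k + 1], c[k::-1]) / (k + 1)
          # Horner evaluation of the truncated series at t = h
          y = 0.0
          for ck in c[::-1]:
              y = y * h + ck
          return y

      def integrate(y0, t_end, tol=1e-12, order=20):
          """Crude step-size rule: pick h so the last retained term stays below tol."""
          t, y = 0.0, y0
          while t < t_end:
              # for this ODE the coefficients grow like |y|^(k+1), hence the estimate
              h = min((tol / abs(y) ** (order + 1)) ** (1.0 / order), t_end - t)
              y = taylor_step(y, h, order)
              t += h
          return y

      y_num = integrate(1.0, 0.5)
      print(y_num, 1.0 / (1.0 - 0.5))   # exact solution of y' = y^2, y(0)=1 is 1/(1-t)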

  1. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
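
    The implicit integration factor idea: for u_t = Au + F(u), multiply by the integrating factor exp(-At) so that the stiff diffusion is handled exactly, and treat the (typically local) reaction implicitly. A minimal second-order (IIF2-style) sketch for a 1-D reaction-diffusion problem, using a dense matrix exponential and a pointwise fixed-point solve for the implicit reaction; the grid size, reaction term and boundary conditions are illustrative assumptions and the sparse-grid machinery of the paper is not included:

      import numpy as np
      from scipy.linalg import expm

      # 1-D reaction-diffusion: u_t = D u_xx + u(1 - u), homogeneous Dirichlet BCs
      N, L, D = 100, 1.0, 0.01
      h = L / (N + 1)
      x = np.linspace(h, L - h, N)
      A = D * (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
               + np.diag(np.ones(N - 1), -1)) / h**2

      def F(u):                      # local logistic reaction (illustrative)
          return u * (1.0 - u)

      dt, n_steps = 0.05, 200
      E = expm(A * dt)               # exact treatment of the diffusion over one step

      u = np.exp(-100.0 * (x - 0.5) ** 2)      # initial bump
      for _ in range(n_steps):
          rhs = E @ (u + 0.5 * dt * F(u))
          # implicit (pointwise) solve of u_new = rhs + dt/2 * F(u_new) by fixed point
          u_new = rhs.copy()
          for _ in range(50):
              u_new = rhs + 0.5 * dt * F(u_new)
          u = u_new

      print("max u after t =", dt * n_steps, ":", u.max())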

  2. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, also ray-tracing precision in the CAD software proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that also in practice, simulation accuracies can be very high. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available. (author)

  3. Evaluation of Optimal Formulas for Gravitational Tensors up to Gravitational Curvatures of a Tesseroid

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-Le; Shen, Wen-Bin

    2018-01-01

    The forward modeling of the topographic effects of the gravitational parameters in the gravity field is a fundamental topic in geodesy and geophysics. Since the gravitational effects, including for instance the gravitational potential (GP), the gravity vector (GV) and the gravity gradient tensor (GGT), of the topographic (or isostatic) mass reduction have been expanded by adding the gravitational curvatures (GC) in geoscience, it is crucial to find efficient numerical approaches to evaluate these effects. In this paper, the GC formulas of a tesseroid in Cartesian integral kernels are derived in 3D/2D forms. Three generally used numerical approaches for computing the topographic effects (e.g., GP, GV, GGT, GC) of a tesseroid are studied, including the Taylor Series Expansion (TSE), Gauss-Legendre Quadrature (GLQ) and Newton-Cotes Quadrature (NCQ) approaches. Numerical investigations show that the GC formulas in Cartesian integral kernels are more efficient if compared to the previously given GC formulas in spherical integral kernels: by exploiting the 3D TSE second-order formulas, the computational burden associated with the former is 46%, as an average, of that associated with the latter. The GLQ behaves better than the 3D/2D TSE and NCQ in terms of accuracy and computational time. In addition, the effects of a spherical shell's thickness and large-scale geocentric distance on the GP, GV, GGT and GC functionals have been studied with the 3D TSE second-order formulas as well. The relative approximation errors of the GC functionals are larger with the thicker spherical shell, which are the same as those of the GP, GV and GGT. Finally, the very-near-area problem and polar singularity problem have been considered by the numerical methods of the 3D TSE, GLQ and NCQ. The relative approximation errors of the GC components are larger than those of the GP, GV and GGT, especially at the very near area. Compared to the GC formulas in spherical integral kernels, these new GC formulas can avoid the polar singularity problem.
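
    A minimal sketch of the Gauss-Legendre quadrature (GLQ) approach for a volume integral of Newton's kernel, here over a small rectangular block rather than an actual tesseroid, and with illustrative density, dimensions and observation point; it shows how the 3-D integral is built from tensor-product 1-D Gauss-Legendre nodes and weights:

      import numpy as np

      def glq_potential(bounds, obs, order=8):
          """Approximate V = G*rho * integral dV / |r - r_obs| over a rectangular block
          with tensor-product Gauss-Legendre quadrature of the given order per axis."""
          G_rho = 6.674e-11 * 2670.0                     # illustrative density 2670 kg/m^3
          nodes, weights = np.polynomial.legendre.leggauss(order)
          # map nodes from [-1, 1] to each coordinate interval
          axes, scales = [], []
          for lo, hi in bounds:
              axes.append(0.5 * (hi - lo) * nodes + 0.5 * (hi + lo))
              scales.append(0.5 * (hi - lo))
          X, Y, Z = np.meshgrid(*axes, indexing="ij")
          W = np.einsum("i,j,k->ijk", weights, weights, weights) * np.prod(scales)
          R = np.sqrt((X - obs[0])**2 + (Y - obs[1])**2 + (Z - obs[2])**2)
          return G_rho * np.sum(W / R)

      block = [(-500.0, 500.0), (-500.0, 500.0), (-1000.0, 0.0)]   # 1 km x 1 km x 1 km block
      for order in (2, 4, 8, 16):
          print(order, glq_potential(block, obs=(0.0, 0.0, 2000.0), order=order))
      # The values converge rapidly with the quadrature order as long as the observation
      # point is not too close to the mass (the "very near area" problem noted above).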

  4. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for the accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is, in general, a time-consuming procedure based on numerical integration. The computational scheme proposed simplifies the target function so that integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real-time without practical loss of accuracy. The procedure, implemented as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.

  5. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge averaged quantities which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that fourth-order accuracy in both space and time is achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.

  6. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  7. Exact nonstationary responses of rectangular thin plate on Pasternak foundation excited by stochastic moving loads

    NASA Astrophysics Data System (ADS)

    Chen, Guohai; Meng, Zeng; Yang, Dixiong

    2018-01-01

    This paper develops an efficient method, termed PE-PIM, to obtain the exact nonstationary responses of a pavement structure, which is modeled as a rectangular thin plate resting on a bi-parametric Pasternak elastic foundation subjected to stochastic moving loads with constant acceleration. Firstly, analytical power spectral density (PSD) functions of the random responses of the thin plate are derived by integrating the pseudo excitation method (PEM) with Duhamel's integral. Based on the PEM, the new equivalent von Mises stress (NEVMS) is proposed, whose PSD function contains all cross-PSD functions between stress components. Then, the PE-PIM that combines the PEM with the precise integration method (PIM) is presented to efficiently obtain the stochastic responses of the plate by replacing Duhamel's integral with the PIM. Moreover, the semi-analytical Monte Carlo simulation is employed to verify the computational results of the developed PE-PIM. Finally, numerical examples demonstrate the high accuracy and efficiency of the PE-PIM for nonstationary random vibration analysis. The effects of the velocity and acceleration of the moving load, the boundary conditions of the plate and the foundation stiffness on the deflection and NEVMS responses are scrutinized.
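
    The pseudo excitation method (PEM) turns a stationary random vibration problem into a deterministic harmonic one: at each frequency the structure is excited by the pseudo load sqrt(S_ff(w)) * exp(i*w*t), and the response PSD is the squared magnitude of the resulting pseudo response. A minimal single-DOF sketch (the system parameters, input PSD and PSD convention are illustrative; the paper applies the idea to a plate on a Pasternak foundation under moving loads):

      import numpy as np

      # SDOF system: m x'' + c x' + k x = f(t), with f stationary with PSD S_ff(w)
      m, c, k = 1.0, 0.8, 100.0

      def S_ff(w):                      # illustrative band-limited white input PSD
          return np.where(np.abs(w) < 30.0, 1.0, 0.0)

      w = np.linspace(0.01, 50.0, 2000)

      # Pseudo excitation: p(t) = sqrt(S_ff(w)) * exp(i w t); the pseudo response is
      # x~ = H(w) * sqrt(S_ff(w)) with H the frequency response function, so the
      # response PSD is |x~|^2 = |H|^2 * S_ff, recovering the classical result.
      H = 1.0 / (k - m * w**2 + 1j * c * w)
      x_pseudo = H * np.sqrt(S_ff(w))
      S_xx = np.abs(x_pseudo) ** 2

      # Response variance from the positive-frequency half of a two-sided angular PSD
      var_x = 2.0 * np.trapz(S_xx, w) / (2.0 * np.pi)
      print("response standard deviation:", np.sqrt(var_x))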

  8. Optical fringe-reflection deflectometry with sparse representation

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-05-01

    Optical fringe-reflection deflectometry is a surprisingly attractive scratch detection technique for specular surfaces owing to its unparalleled local sensitivity. Full-field surface topography is obtained from a measured normal field using gradient integration. However, there may not be an ideal measured gradient field for deflectometry reconstruction in practice. Both the non-integrability condition and various kinds of image noise distributions, which are present in the indirectly measured gradient field, may lead to ambiguity about the scratches on specular surfaces. In order to reduce misjudgment of scratches, sparse representation is introduced into the Southwell curl equation for deflectometry. The curl can be represented as a linear combination of the given redundant dictionary for curl and the sparsest solution for gradient refinement. The non-integrability condition and noise perturbation can be overcome with sparse representation for gradient refinement. Numerical simulations demonstrate that the accuracy rate of judgment of scratches can be enhanced with sparse representation compared to the standard least-squares integration. Preliminary experiments are performed with practically measured deflectometric data to verify the validity of the algorithm.

  9. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.

  10. Unsteady numerical simulation of a round jet with impinging microjets for noise suppression

    PubMed Central

    Lew, Phoi-Tack; Najafi-Yazdi, Alireza; Mongeau, Luc

    2013-01-01

    The objective of this study was to determine the feasibility of a lattice-Boltzmann method (LBM)-Large Eddy Simulation methodology for the prediction of sound radiation from a round jet-microjet combination. The distinct advantage of LBM over traditional computational fluid dynamics methods is its ease of handling problems with complex geometries. Numerical simulations of an isothermal Mach 0.5, Re_D = 1 × 10^5 circular jet (D_j = 0.0508 m) with and without the presence of 18 microjets (D_mj = 1 mm) were performed. The presence of microjets resulted in a decrease in the axial turbulence intensity and turbulent kinetic energy. The associated decrease in radiated sound pressure level was around 1 dB. The far-field sound was computed using the porous Ffowcs Williams-Hawkings surface integral acoustic method. The trend obtained is in qualitative agreement with experimental observations. The results of this study support the accuracy of LBM based numerical simulations for predictions of the effects of noise suppression devices on the radiated sound power. PMID:23967931

  11. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    To address the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus, many invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation or approximation of the system model, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
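
    A generic illustration of the kind of structure exploitation described above (not the paper's algorithm): when the measurement matrix H merely selects a few states, as with GPS position updates of an inertial error-state filter, the products H P and P H^T reduce to row and column slices of the covariance, and the Joseph-form update keeps the covariance symmetric. All sizes and values below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n, m = 15, 3                        # e.g., 15 error states, 3 position measurements

      idx = np.array([0, 1, 2])           # H simply selects states 0..2
      P = np.cov(rng.normal(size=(n, 200)))          # some symmetric positive covariance
      R = 0.5 * np.eye(m)
      z_innov = rng.normal(size=m)                   # measurement innovation

      # Naive update with an explicit (mostly zero) H:
      H = np.zeros((m, n)); H[np.arange(m), idx] = 1.0
      K_naive = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

      # Sparsity-aware update: slice instead of multiplying by H
      PHt = P[:, idx]                                # equals P @ H.T
      S = P[np.ix_(idx, idx)] + R                    # equals H @ P @ H.T + R
      K = PHt @ np.linalg.inv(S)
      assert np.allclose(K, K_naive)

      x_corr = K @ z_innov
      # Joseph form keeps the updated covariance symmetric and positive definite
      I_KH = np.eye(n) - K @ H
      P_new = I_KH @ P @ I_KH.T + K @ R @ K.T
      print("symmetry error:", np.abs(P_new - P_new.T).max())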

  12. Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2001-01-01

    A method of minimizing numerical errors and improving the nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. From the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form with the new unknowns as the small changes of the conservative variables with respect to their large stagnation values. From the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfy the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.

  13. A numerical study of diurnally varying surface temperature on flow patterns and pollutant dispersion in street canyons

    NASA Astrophysics Data System (ADS)

    Tan, Zijing; Dong, Jingliang; Xiao, Yimin; Tu, Jiyuan

    2015-03-01

    The impacts of the diurnal variation of surface temperature on street canyon flow pattern and pollutant dispersion are investigated based on a two-dimensional street canyon model under different thermal stratifications. Unevenly distributed street temperature conditions and a user-defined wall function representing the heat transfer between the air and the street canyon are integrated into the current numerical model. The prediction accuracy of this model is successfully validated against a published wind tunnel experiment. Then, a series of numerical simulations representing four time scenarios (Morning, Afternoon, Noon and Night) are performed at different Bulk Richardson numbers (Rb). The results demonstrate that unevenly distributed street temperature conditions significantly alter street canyon flow structure and pollutant dispersion characteristics compared with the conventional uniform street temperature assumption, especially for the morning event. Moreover, air flow patterns and pollutant dispersion are greatly influenced by the diurnal variation of surface temperature under unstable stratification conditions. Furthermore, the residual pollutant in the near-ground zone decreases as Rb increases in the noon, afternoon and night events under all studied stability conditions.

  14. High output lamp with high brightness

    DOEpatents

    Kirkpatrick, Douglas A.; Bass, Gary K.; Copsey, Jesse F.; Garber, Jr., William E.; Kwong, Vincent H.; Levin, Izrail; MacLennan, Donald A.; Roy, Robert J.; Steiner, Paul E.; Tsai, Peter; Turner, Brian P.

    2002-01-01

    An ultra bright, low wattage inductively coupled electrodeless aperture lamp is powered by a solid state RF source in the range of several tens to several hundreds of watts at various frequencies in the range of 400 to 900 MHz. Numerous novel lamp circuits and components are disclosed including a wedding ring shaped coil having one axial and one radial lead, a high accuracy capacitor stack, a high thermal conductivity aperture cup and various other aperture bulb configurations, a coaxial capacitor arrangement, and an integrated coil and capacitor assembly. Numerous novel RF circuits are also disclosed including a high power oscillator circuit with reduced complexity resonant pole configuration, parallel RF power FET transistors with soft gate switching, a continuously variable frequency tuning circuit, a six port directional coupler, an impedance switching RF source, and an RF source with controlled frequency-load characteristics. Numerous novel RF control methods are disclosed including controlled adjustment of the operating frequency to find a resonant frequency and reduce reflected RF power, controlled switching of an impedance switched lamp system, active power control and active gate bias control.

  15. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.

    2007-09-15

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  16. Computational issues in the simulation of two-dimensional discrete dislocation mechanics

    NASA Astrophysics Data System (ADS)

    Segurado, J.; LLorca, J.; Romero, I.

    2007-06-01

    The effect of the integration time step and the introduction of a cut-off velocity for the dislocation motion was analysed in discrete dislocation dynamics (DD) simulations of a single crystal microbeam. Two loading modes, bending and uniaxial tension, were examined. It was found that a longer integration time step led to a progressive increment of the oscillations in the numerical solution, which would eventually diverge. This problem could be corrected in the simulations carried out in bending by introducing a cut-off velocity for the dislocation motion. This strategy (long integration times and a cut-off velocity for the dislocation motion) did not recover, however, the solution computed with very short time steps in uniaxial tension: the dislocation density was overestimated and the dislocation patterns modified. The different response to the same numerical algorithm was explained in terms of the nature of the dislocations generated in each case: geometrically necessary in bending and statistically stored in tension. The evolution of the dislocation density in the former was controlled by the plastic curvature of the beam and was independent of the details of the simulations. On the contrary, the steady-state dislocation density in tension was determined by the balance between nucleation of dislocations and those which are annihilated or which exit the beam. Changes in the DD imposed by the cut-off velocity altered this equilibrium and the solution. These results point to the need for detailed analyses of the accuracy and stability of the dislocation dynamic simulations to ensure that the results obtained are not fundamentally affected by the numerical strategies used to solve this complex problem.

  17. A novel method for calculating relative free energy of similar molecules in two environments

    NASA Astrophysics Data System (ADS)

    Farhi, Asaf; Singh, Bipin

    2017-03-01

    Calculating relative free energies is a topic of substantial interest and has many applications including solvation and binding free energies, which are used in computational drug discovery. However, there remain the challenges of accuracy, simple implementation, robustness and efficiency, which prevent the calculations from being automated and limit their use. Here we present an exact and complete decoupling analysis in which the partition functions of the compared systems decompose into the partition functions of the common and different subsystems. This decoupling analysis is applicable to submolecules with coupled degrees of freedom such as the methyl group and to any potential function (including the typical dihedral potentials), enabling to remove less terms in the transformation which results in a more efficient calculation. Then we show mathematically, in the context of partition function decoupling, that the two compared systems can be simulated separately, eliminating the need to design a composite system. We demonstrate the decoupling analysis and the separate transformations in a relative free energy calculation using MD simulations for a general force field and compare to another calculation and to experimental results. We present a unified soft-core technique that ensures the monotonicity of the numerically integrated function (analytical proof) which is important for the selection of intermediates. We show mathematically that in this soft-core technique the numerically integrated function can be non-steep only when we transform the systems separately, which can simplify the numerical integration. Finally, we show that when the systems have rugged energy landscape they can be equilibrated without introducing another sampling dimension which can also enable to use the simulation results for other free energy calculations.

  18. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    NASA Astrophysics Data System (ADS)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
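
    The multi-categorical Heidke skill score used for this verification can be computed directly from a forecast/observation contingency table. The sketch below is a generic implementation with a hypothetical three-category example, not SNOW-V10 data.

      import numpy as np

      def heidke_skill_score(table):
          """Multi-category Heidke skill score from a K x K contingency table.

          table[i, j] = number of cases forecast in category i and observed in j.
          HSS = (Po - Pe) / (1 - Pe), where Po is the proportion correct and Pe is
          the proportion correct expected by chance from the marginal totals.
          """
          table = np.asarray(table, dtype=float)
          n = table.sum()
          po = np.trace(table) / n
          pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n ** 2
          return (po - pe) / (1.0 - pe)

      # Hypothetical 3-category example (e.g., light/moderate/heavy precipitation)
      example = [[30,  5,  2],
                 [ 6, 20,  4],
                 [ 1,  3, 10]]
      print(round(heidke_skill_score(example), 3))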

  19. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
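
    A minimal sketch of a non-intrusive, regression-based polynomial chaos expansion in one Gaussian random dimension is given below. The model function, truncation order and sample size are illustrative stand-ins for an expensive hydrologic model, not the problems treated in the paper.

      import numpy as np
      from numpy.polynomial import hermite_e as He
      from math import factorial

      rng = np.random.default_rng(0)

      def model(x):
          # Stand-in for an expensive model with a single Gaussian input
          return np.exp(0.3 * x) + 0.1 * x ** 2

      # Sample the random input, run the "model", and fit PCE coefficients by
      # least squares against probabilists' Hermite polynomials He_0..He_P.
      P = 6
      xi = rng.standard_normal(200)
      y = model(xi)
      A = np.column_stack([He.hermeval(xi, np.eye(P + 1)[k]) for k in range(P + 1)])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      # Orthogonality of He_k under the standard normal gives cheap statistics:
      mean = coef[0]
      var = sum(coef[k] ** 2 * factorial(k) for k in range(1, P + 1))
      print(mean, var)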

  20. A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates

    NASA Astrophysics Data System (ADS)

    Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus

    2008-12-01

    A global model of the atmosphere is presented governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented, for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands. Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
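
    The strong stability-preserving third-order Runge-Kutta scheme referred to above is commonly written in the following three-stage (Shu-Osher) form; the sketch uses a placeholder right-hand side standing in for the DG spatial operator.

      import numpy as np

      def ssp_rk3_step(u, dt, L):
          """One step of the three-stage strong stability-preserving RK3 scheme
          for du/dt = L(u)."""
          u1 = u + dt * L(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

      # Toy check on du/dt = -u (a DG spatial operator would replace the lambda):
      u, dt = np.array([1.0]), 0.1
      for _ in range(10):
          u = ssp_rk3_step(u, dt, lambda v: -v)
      print(u, np.exp(-1.0))   # third-order accurate approximation of e^{-1}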

  1. An exact solution for ideal dam-break floods on steep slopes

    USGS Publications Warehouse

    Ancey, C.; Iverson, R.M.; Rentschler, M.; Denlinger, R.P.

    2008-01-01

    The shallow-water equations are used to model the flow resulting from the sudden release of a finite volume of frictionless, incompressible fluid down a uniform slope of arbitrary inclination. The hodograph transformation and Riemann's method make it possible to transform the governing equations into a linear system and then deduce an exact analytical solution expressed in terms of readily evaluated integrals. Although the solution treats an idealized case never strictly realized in nature, it is uniquely well-suited for testing the robustness and accuracy of numerical models used to model shallow-water flows on steep slopes. Copyright 2008 by the American Geophysical Union.

  2. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
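
    For the special case of a linear time-invariant system, the state transition matrix over a step is the matrix exponential and the update is exact for any step size; the sketch below illustrates only this special case (the paper's variable-order method for time-varying and nonlinear coefficients is more general).

      import numpy as np
      from scipy.linalg import expm

      # Undamped oscillator x'' = -w^2 x written as a first-order system y' = A y.
      w = 2.0
      A = np.array([[0.0, 1.0],
                    [-w ** 2, 0.0]])
      y0 = np.array([1.0, 0.0])

      dt = 0.5                      # large step: still exact for an LTI system
      Phi = expm(A * dt)            # state transition matrix over one step
      y = y0.copy()
      for _ in range(4):            # propagate to t = 2.0
          y = Phi @ y

      t = 2.0
      print(y)
      print([np.cos(w * t), -w * np.sin(w * t)])   # exact solution for comparison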

  3. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
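
    For reference, one recursion level of Strassen's algorithm, the baseline fast algorithm discussed above, replaces eight half-size products by seven; a minimal sketch, assuming square matrices with even dimensions, is given below.

      import numpy as np

      def strassen_one_level(A, B):
          """Multiply A @ B using one level of Strassen's recursion
          (7 half-size products instead of 8). Assumes even dimensions."""
          n = A.shape[0] // 2
          A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
          B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

          M1 = (A11 + A22) @ (B11 + B22)
          M2 = (A21 + A22) @ B11
          M3 = A11 @ (B12 - B22)
          M4 = A22 @ (B21 - B11)
          M5 = (A11 + A12) @ B22
          M6 = (A21 - A11) @ (B11 + B12)
          M7 = (A12 - A22) @ (B21 + B22)

          C = np.empty_like(A)
          C[:n, :n] = M1 + M4 - M5 + M7
          C[:n, n:] = M3 + M5
          C[n:, :n] = M2 + M4
          C[n:, n:] = M1 - M2 + M3 + M6
          return C

      rng = np.random.default_rng(1)
      A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
      print(np.max(np.abs(strassen_one_level(A, B) - A @ B)))   # small but nonzero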

  4. FOLDER: A numerical tool to simulate the development of structures in layered media

    NASA Astrophysics Data System (ADS)

    Adamuszek, Marta; Dabrowski, Marcin; Schmid, Daniel W.

    2015-04-01

    FOLDER is a numerical toolbox for modelling deformation in layered media during layer parallel shortening or extension in two dimensions. FOLDER builds on MILAMIN [1], a finite element method based mechanical solver, with a range of utilities included from the MUTILS package [2]. The numerical mesh is generated using the Triangle software [3]. The toolbox includes features that allow for: 1) designing complex structures such as multi-layer stacks, 2) accurately simulating large-strain deformation of linear and non-linear viscous materials, 3) post-processing of various physical fields such as velocity (total and perturbing), rate of deformation, finite strain, stress, deviatoric stress, pressure, apparent viscosity. FOLDER is designed to ensure maximum flexibility to configure model geometry, define material parameters, specify range of numerical parameters in simulations and choose the plotting options. FOLDER is an open source MATLAB application and comes with a user-friendly graphical interface. The toolbox additionally comprises an educational application that illustrates various analytical solutions of growth rates calculated for the cases of folding and necking of a single layer with interfaces perturbed with a single sinusoidal waveform. We further derive two novel analytical expressions for the growth rate in the cases of folding and necking of a linear viscous layer embedded in a linear viscous medium of a finite thickness. We use FOLDER to test the accuracy of single-layer folding simulations using various 1) spatial and temporal resolutions, 2) time integration schemes, and 3) iterative algorithms for non-linear materials. The accuracy of the numerical results is quantified by: 1) comparing them to an analytical solution, if available, or 2) running convergence tests. As a result, we provide a map of the optimal choice of grid size, time step, and number of iterations to keep the results of the numerical simulations below a given error for a given time integration scheme. We also demonstrate that Euler and Leapfrog time integration schemes are not recommended for any practical use. Finally, the capabilities of the toolbox are illustrated based on two examples: 1) shortening of a synthetic multi-layer sequence and 2) extension of a folded quartz vein embedded in phyllite from Sprague Upper Reservoir (example discussed by Sherwin and Chapple [4]). The latter example demonstrates that FOLDER can be successfully used for reverse modelling and mechanical restoration. [1] Dabrowski, M., Krotkiewski, M., and Schmid, D. W., 2008, MILAMIN: MATLAB-based finite element method solver for large problems. Geochemistry Geophysics Geosystems, vol. 9. [2] Krotkiewski, M. and Dabrowski M., 2010 Parallel symmetric sparse matrix-vector product on scalar multi-core cpus. Parallel Computing, 36(4):181-198 [3] Shewchuk, J. R., 1996, Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator, In: Applied Computational Geometry: Towards Geometric Engineering (Ming C. Lin and Dinesh Manocha, editors), Vol. 1148 of Lecture Notes in Computer Science, pp. 203-222, Springer-Verlag, Berlin [4] Sherwin, J.A., Chapple, W.M., 1968. Wavelengths of single layer folds - a comparison between theory and observation. American Journal of Science 266 (3), p. 167-179

  5. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement suffered certain drawbacks, for example, deficiencies in the implemented interpolation technique; the non-reciprocity in head calculations or flow calculations; lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor's expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor's series to improve the numerical solution. In this scheme, flow reciprocity is maintained and high order of refinement was achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.

  6. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
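
    The final linearized least-squares solve mentioned above uses a QR factorization; a generic sketch of that step on a synthetic problem (not GRACE data or measurement partials) is given below.

      import numpy as np
      from scipy.linalg import solve_triangular

      rng = np.random.default_rng(2)

      # Synthetic linearized problem: observations y = H x + noise
      H = rng.standard_normal((500, 8))      # mapping (design) matrix
      x_true = rng.standard_normal(8)
      y = H @ x_true + 1e-6 * rng.standard_normal(500)

      # Solve min ||y - H x|| via the thin QR factorization: R x = Q^T y
      Q, R = np.linalg.qr(H, mode="reduced")
      x_hat = solve_triangular(R, Q.T @ y, lower=False)

      print(np.max(np.abs(x_hat - x_true)))   # close to the noise level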

  7. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.

  8. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

    The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, while retaining the property of the MA method that the numerical errors are kept within a limited bound. Thus, it leads to great accuracy for numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.

  9. Robust numerical electromagnetic eigenfunction expansion algorithms

    NASA Astrophysics Data System (ADS)

    Sainath, Kamalesh

    This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique, however, leads to spurious wave scattering, and the resulting degradation in computational accuracy requires analysis. The mathematical exposition of this novel tilted-layer modeling formulation, together with an exhaustive simulation-based study and analysis of its limitations, is Chapter 7's main contribution.
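
    As an illustration of contribution (3), Gauss-Laguerre quadrature evaluates integrals over a semi-infinite interval whose integrand decays exponentially, which is the situation for the deformed contour tails. The sketch below applies it to a scalar model integral rather than to an actual spectral-domain integrand.

      import numpy as np

      # Model tail integral: I = int_0^inf e^{-a t} cos(b t) dt = a / (a^2 + b^2)
      a, b = 2.0, 3.0

      # Gauss-Laguerre nodes/weights approximate int_0^inf e^{-x} g(x) dx.
      # Substituting x = a*t pulls out the exponential decay of the integrand.
      x, w = np.polynomial.laguerre.laggauss(30)
      approx = np.sum(w * np.cos(b * x / a)) / a

      exact = a / (a ** 2 + b ** 2)
      print(approx, exact)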

  10. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  11. System of experts for intelligent data management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1993-01-01

    A proposal to conduct research and development on a system of expert systems for intelligent data management (SEIDAM) is being developed. CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  12. System of Experts for Intelligent Data Management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1992-01-01

    It is proposed to conduct research and development on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  13. Indirect boundary force measurements in beam-like structures using a derivative estimator

    NASA Astrophysics Data System (ADS)

    Chesne, Simon

    2014-12-01

    This paper proposes a new method for the identification of boundary forces (shear force or bending moment) in a beam, based on displacement measurements. The problem is considered in terms of the determination of the boundary spatial derivatives of transverse displacements. By assuming the displacement fields to be approximated by Taylor expansions in a domain close to the boundaries, the spatial derivatives can be estimated using specific point-wise derivative estimators. This approach makes it possible to extract the derivatives using a weighted spatial integration of the displacement field. Following the theoretical description, numerical simulations made with exact and noisy data are used to determine the relationship between the size of the integration domain and the wavelength of the vibrations. The simulations also highlight the self-regularization of the technique. Experimental measurements demonstrate the feasibility and accuracy of the proposed method.

  14. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  15. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  16. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  17. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np(2)) to O(qp(2)). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
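
    The integral that dominates the continuous-time point-process log-likelihood can be approximated with Gaussian quadrature as described above. The sketch below does this for a toy log-linear intensity model; the intensity model, spike times and parameter values are illustrative and are not taken from the paper.

      import numpy as np

      def log_likelihood(spike_times, theta, T, q=60):
          """Continuous-time inhomogeneous-Poisson log-likelihood
          log L = sum_i log lambda(t_i) - int_0^T lambda(t) dt,
          with the integral evaluated by q-point Gauss-Legendre quadrature."""
          a, b = theta                              # toy log-linear intensity model
          lam = lambda t: np.exp(a + b * t)

          # Map Gauss-Legendre nodes from [-1, 1] to [0, T]
          x, w = np.polynomial.legendre.leggauss(q)
          t_nodes = 0.5 * T * (x + 1.0)
          integral = 0.5 * T * np.sum(w * lam(t_nodes))

          return np.sum(np.log(lam(spike_times))) - integral

      # Hypothetical spike train and parameters
      spikes = np.array([0.12, 0.55, 0.9, 1.4, 1.7])
      print(log_likelihood(spikes, theta=(0.5, 0.3), T=2.0))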

  18. CdSe/ZnS quantum dot fluorescence spectra shape-based thermometry via neural network reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, Troy; Laboratory of Soft Matter and Biophysics, Department of Physics and Astronomy, KU Leuven, Celestijnenlaan 200D, B-3001 Heverlee; Liu, Liwang

    As a system of interest gets small, due to the influence of the sensor mass and heat leaks through the sensor contacts, thermal characterization by means of contact temperature measurements becomes cumbersome. Non-contact temperature measurement offers a suitable alternative, provided a reliable relationship between the temperature and the detected signal is available. In this work, exploiting the temperature dependence of their fluorescence spectrum, the use of quantum dots as thermomarkers on the surface of a fiber of interest is demonstrated. The performance is assessed of a series of neural networks that use different spectral shape characteristics as inputs (peak-based—peak intensity, peak wavelength; shape-based—integrated intensity, their ratio, full-width half maximum, peak normalized intensity at certain wavelengths, and summation of intensity over several spectral bands) and that yield at their output the fiber temperature in the optically probed area on a spider silk fiber. Starting from neural networks trained on fluorescence spectra acquired in steady state temperature conditions, numerical simulations are performed to assess the quality of the reconstruction of dynamical temperature changes that are photothermally induced by illuminating the fiber with periodically intensity-modulated light. Comparison of the five neural networks investigated to multiple types of curve fits showed that using neural networks trained on a combination of the spectral characteristics improves the accuracy over use of a single independent input, with the greatest accuracy observed for inputs that included both intensity-based measurements (peak intensity) and shape-based measurements (normalized intensity at multiple wavelengths), with an ultimate accuracy of 0.29 K via numerical simulation based on experimental observations. The implications are that quantum dots can be used as a more stable and accurate fluorescence thermometer for solid materials and that use of neural networks for temperature reconstruction improves the accuracy of the measurement.

  19. Two high accuracy digital integrators for Rogowski current transducers.

    PubMed

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly subject to the analog integrators, which have typical problems such as poor long-term stability and being susceptible to environmental conditions. The digital integrators can be another choice, but they cannot obtain a stable and accurate output because the DC component in the original signal accumulates, which leads to output DC drift. Unknown initial conditions can also result in integral output DC offset. This paper proposes two improved digital integrators used in Rogowski current transducers instead of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied in improving the Al-Alaoui integrator to change its DC response and get an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators have better performance than analog integrators. Simulation models are built for the purpose of verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and changing temperature conditions.
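
    The Al-Alaoui digital integrator on which the paper builds has a standard difference equation; the sketch below implements it with a simple attenuation (leakage) coefficient on the feedback term to limit DC drift. The PID feedback correction described in the paper is not reproduced, and the coefficient values are illustrative assumptions.

      import numpy as np
      from scipy.signal import lfilter

      def al_alaoui_integrate(x, fs, atten=0.9995):
          """Al-Alaoui digital integrator, H(z) = (7T/8)(1 + z^-1/7) / (1 - z^-1),
          with an attenuation coefficient on the feedback term (pole moved slightly
          inside the unit circle) to limit DC drift."""
          T = 1.0 / fs
          b = [7.0 * T / 8.0, T / 8.0]
          a = [1.0, -atten]
          return lfilter(b, a, x)

      # Integrating di/dt measured by a Rogowski coil approximately recovers the current:
      fs, f = 10_000.0, 50.0
      t = np.arange(0, 0.4, 1.0 / fs)
      didt = 2 * np.pi * f * np.cos(2 * np.pi * f * t)   # derivative of sin(2*pi*f*t)
      i_rec = al_alaoui_integrate(didt, fs)
      print(np.max(np.abs(i_rec[2000:] - np.sin(2 * np.pi * f * t[2000:]))))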

  20. Two high accuracy digital integrators for Rogowski current transducers

    NASA Astrophysics Data System (ADS)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly subject to the analog integrators, which have typical problems such as poor long-term stability and being susceptible to environmental conditions. The digital integrators can be another choice, but they cannot obtain a stable and accurate output for the reason that the DC component in original signal can be accumulated, which will lead to output DC drift. Unknown initial conditions can also result in integral output DC offset. This paper proposes two improved digital integrators used in Rogowski current transducers instead of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied in improving the Al-Alaoui integrator to change its DC response and get an ideal frequency response. For the special design in the field of digital signal processing, the improved digital integrators have better performance than analog integrators. Simulation models are built for the purpose of verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and temperature changing condition.

  1. Answering the missed call: Initial exploration of cognitive and electrophysiological changes associated with smartphone use and abuse.

    PubMed

    Hadar, Aviad; Hadas, Itay; Lazarovits, Avi; Alyagon, Uri; Eliraz, Daniel; Zangen, Abraham

    2017-01-01

    Smartphone usage is now integral to human behavior. Recent studies associate extensive usage with a range of debilitating effects. We sought to determine whether excessive usage is accompanied by measurable neural, cognitive and behavioral changes. Subjects lacking previous experience with smartphones (n = 35) were compared to a matched group of heavy smartphone users (n = 16) on numerous behavioral and electrophysiological measures recorded using electroencephalogram (EEG) combined with transcranial magnetic stimulation (TMS) over the right prefrontal cortex (rPFC). In a second longitudinal intervention, a randomly selected sample of the original non-users received smartphones for 3 months while the others served as controls. All measurements were repeated following this intervention. Heavy users showed increased impulsivity, hyperactivity and negative social concern. We also found reduced early TMS evoked potentials in the rPFC of this group, which correlated with severity of self-reported inattention problems. Heavy users also obtained lower accuracy rates than nonusers in a numerical processing task. Critically, the second part of the experiment revealed that both the numerical processing and social cognition domains are causally linked to smartphone usage. Heavy usage was found to be associated with impaired attention, reduced numerical processing capacity, changes in social cognition, and reduced right prefrontal cortex (rPFC) excitability. Memory impairments were not detected. Novel usage over a short period induced a significant reduction in numerical processing capacity and changes in social cognition.

  2. Parallel Solver for Diffuse Optical Tomography on Realistic Head Models With Scattering and Clear Regions.

    PubMed

    Placati, Silvio; Guermandi, Marco; Samore, Andrea; Scarselli, Eleonora Franchi; Guerrieri, Roberto

    2016-09-01

    Diffuse optical tomography is an imaging technique, based on evaluation of how light propagates within the human head to obtain the functional information about the brain. Precision in reconstructing such an optical properties map is highly affected by the accuracy of the light propagation model implemented, which needs to take into account the presence of clear and scattering tissues. We present a numerical solver based on the radiosity-diffusion model, integrating the anatomical information provided by a structural MRI. The solver is designed to run on parallel heterogeneous platforms based on multiple GPUs and CPUs. We demonstrate how the solver provides a 7 times speed-up over an isotropic-scattered parallel Monte Carlo engine based on a radiative transport equation for a domain composed of 2 million voxels, along with a significant improvement in accuracy. The speed-up greatly increases for larger domains, allowing us to compute the light distribution of a full human head ( ≈ 3 million voxels) in 116 s for the platform used.

  3. Improving Children’s Knowledge of Fraction Magnitudes

    PubMed Central

    Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.

    2016-01-01

    We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards’ suggestions for teaching fractions, would improve children’s fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards. PMID:27768756

  4. A third-order implicit discontinuous Galerkin method based on a Hermite WENO reconstruction for time-accurate solution of the compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Liu, Xiaodong; Luo, Hong

    2015-06-01

    Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier–Stokes equations. At each time step, a lower-upper symmetric Gauss–Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge–Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than the standard third-order discontinuous Galerkin methods, and less computing time than the lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.

  5. A study of hierarchical clustering of galaxies in an expanding universe

    NASA Astrophysics Data System (ADS)

    Porter, D. H.

    The nonlinear hierarchical clustering of galaxies in an Einstein-de Sitter (Omega = 1), initially white noise mass fluctuations (n = 0) model universe is investigated and shown to be in contradiction with previous results. The model is done in terms of an 11,000-body numerical simulation. The independent statistics of 0.72 million particles are used to simulate the boundary conditions. A new method for integrating the Newtonian N-body gravity equations, which has controllable accuracy, incorporates a recursive center of mass reduction, and regularizes two-body encounters, is used to do the simulation. The coordinate system used here is well suited for the investigation of galaxy clustering, incorporating the independent positions and velocities of an arbitrary number of particles into a logarithmic hierarchy of center of mass nodes. The boundary for the simulation is created by using this hierarchy to map the independent statistics of 0.72 million particles into just 4,000 particles. This method for simulating the boundary conditions also has controllable accuracy.

  6. Normative Data on Audiovisual Speech Integration Using Sentence Recognition and Capacity Measures

    PubMed Central

    Altieri, Nicholas; Hudock, Daniel

    2016-01-01

    Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians. Design: The study consisted of two experiments: First, accuracy scores were obtained using CUNY sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. Study Sample: We report data on two measures of integration obtained from a sample comprised of 86 young and middle-age adult listeners. Results: To summarize our results, capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Conclusions: Results suggest that a listener’s integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy. PMID:26853446

  7. Normative data on audiovisual speech integration using sentence recognition and capacity measures.

    PubMed

    Altieri, Nicholas; Hudock, Daniel

    2016-01-01

    The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians. The study consisted of two experiments: First, accuracy scores were obtained using City University of New York (CUNY) sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. We report data on two measures of integration obtained from a sample comprised of 86 young and middle-age adult listeners. To summarize our results, capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy.

  8. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    PubMed

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.

  9. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations

    PubMed Central

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon the two fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase-fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and compare it to model predictions, was the time-pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the corresponding fluctuation amplitude, and the signal’s energy computed as the integral of the PSD. A 3D version of the TFM was also used and it improved the match with the empirical PSD in the very first part of the frequency spectrum. PMID:28695119
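
    The processing chain described above, a pressure-drop time series converted to a PSD whose integral gives the fluctuation energy, can be sketched as follows on a synthetic signal. The estimator used here (Welch's method) and all signal parameters are illustrative assumptions, since the abstract does not specify them.

      import numpy as np
      from scipy.signal import welch
      from scipy.integrate import trapezoid

      rng = np.random.default_rng(3)

      # Synthetic 40 s pressure-drop record: a dominant bubble-passage frequency
      # near 2 Hz plus broadband noise (stand-in for experimental/CFD signals).
      fs, T = 200.0, 40.0
      t = np.arange(0, T, 1.0 / fs)
      dp = 150.0 * np.sin(2 * np.pi * 2.0 * t) + 40.0 * rng.standard_normal(t.size)

      f, psd = welch(dp, fs=fs, nperseg=1024)

      # Signal "energy" (variance of the fluctuations) as the integral of the PSD
      energy = trapezoid(psd, f)
      print(f[np.argmax(psd)], energy)        # dominant frequency and integrated PSD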

  10. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations.

    PubMed

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon the two fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase-fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and compare it to model predictions, was the time-pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the corresponding fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used and it improved the match with the empirical PSD in the very first part of the frequency spectrum.

  11. Dynamic adaptive chemistry with operator splitting schemes for reactive flow simulations

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Xu, Chao; Lu, Tianfeng; Singer, Michael A.

    2014-04-01

    A numerical technique that uses dynamic adaptive chemistry (DAC) with operator splitting schemes to solve the equations governing reactive flows is developed and demonstrated. Strang-based splitting schemes are used to separate the governing equations into transport fractional substeps and chemical reaction fractional substeps. The DAC method expedites the numerical integration of reaction fractional substeps by using locally valid skeletal mechanisms that are obtained using the directed relation graph (DRG) reduction method to eliminate unimportant species and reactions from the full mechanism. Second-order temporal accuracy of the Strang-based splitting schemes with DAC is demonstrated on one-dimensional, unsteady, freely-propagating, premixed methane/air laminar flames with detailed chemical kinetics and realistic transport. The use of DAC dramatically reduces the CPU time required to perform the simulation, and there is minimal impact on solution accuracy. It is shown that with DAC the starting species and resulting skeletal mechanisms strongly depend on the local composition in the flames. In addition, the number of retained species may be significant only near the flame front region where chemical reactions are significant. For the one-dimensional methane/air flame considered, speed-up factors of three and five are achieved over the entire simulation for GRI-Mech 3.0 and USC-Mech II, respectively. Greater speed-up factors are expected for larger chemical kinetics mechanisms.
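
    A minimal sketch of the Strang splitting structure described above, applied to a scalar advection-decay model rather than detailed chemistry (velocity, rate constant, and grid are illustrative assumptions; in the paper, the reaction substep would instead call the DAC-reduced kinetics).

        import numpy as np

        def transport_half_step(c, u, dx, dt):
            # First-order upwind advection over half a time step (u > 0, periodic domain).
            return c - 0.5 * dt * u * (c - np.roll(c, 1)) / dx

        def reaction_step(c, k, dt):
            # Exact integration of linear decay dc/dt = -k c over a full step.
            return c * np.exp(-k * dt)

        nx, u, k = 200, 1.0, 5.0
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        dt = 0.4 * dx / u                        # CFL-limited step
        c = np.exp(-200.0 * (x - 0.3) ** 2)      # initial concentration pulse

        for _ in range(100):
            c = transport_half_step(c, u, dx, dt)    # T(dt/2)
            c = reaction_step(c, k, dt)              # R(dt)
            c = transport_half_step(c, u, dx, dt)    # T(dt/2): Strang, second order in time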

  12. Numerical integration of asymptotic solutions of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  13. Integrated analysis of particle interactions at hadron colliders Report of research activities in 2010-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadolsky, Pavel M.

    2015-08-31

    The report summarizes research activities of the project "Integrated analysis of particle interactions" at Southern Methodist University, funded by the 2010 DOE Early Career Research Award DE-SC0003870. The goal of the project is to provide state-of-the-art predictions in quantum chromodynamics in order to achieve the objectives of the LHC program for studies of electroweak symmetry breaking and new physics searches. We published 19 journal papers focusing on in-depth studies of proton structure and integration of advanced calculations from different areas of particle phenomenology: multi-loop calculations, accurate long-distance hadronic functions, and precise numerical programs. Methods for factorization of QCD cross sections were advanced in order to develop new generations of CTEQ parton distribution functions (PDFs), CT10 and CT14. These distributions provide the core theoretical input for multi-loop perturbative calculations by LHC experimental collaborations. A novel "PDF meta-analysis" technique was invented to streamline applications of PDFs in numerous LHC simulations and to combine PDFs from various groups using multivariate stochastic sampling of PDF parameters. The meta-analysis will help to bring the LHC perturbative calculations to a new level of accuracy, while reducing computational efforts. The work on parton distributions was complemented by development of advanced perturbative techniques to predict observables dependent on several momentum scales, including production of massive quarks and transverse momentum resummation at the next-to-next-to-leading order in QCD.

  14. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedure is used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean-square sense and is formulated as a minimization problem. Optimization of the parameters of the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated over a sufficiently large number of independent runs to establish its significance.
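
    A hedged sketch of the discretize-then-optimize idea (not the authors' GA-SQP code): the Thomas-Fermi equation y'' = y^{3/2}/sqrt(x), with y(0) = 1 and y(L) ≈ 0, is written as central-difference residuals whose mean-square value is minimized; a local SLSQP solve from a smooth guess stands in for the GA global search plus SQP refinement. Grid size, domain length, and initial guess are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        N, L = 40, 20.0                       # grid resolution and truncated domain (assumed)
        x = np.linspace(0.0, L, N + 1)
        h = x[1] - x[0]

        def fitness(y_int):
            # Mean-square residual of the central-difference Thomas-Fermi equations
            # at the interior nodes, with y(0) = 1 and y(L) = 0 imposed.
            y = np.concatenate(([1.0], np.clip(y_int, 0.0, None), [0.0]))
            res = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2 - y[1:-1] ** 1.5 / np.sqrt(x[1:-1])
            return np.mean(res ** 2)

        y0 = np.exp(-x[1:-1])                          # smooth initial guess
        sol = minimize(fitness, y0, method="SLSQP")    # stand-in for GA global search + SQP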

  15. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  16. Evaluation of Neutron-induced Cross Sections and their Related Covariances with Physical Constraints

    NASA Astrophysics Data System (ADS)

    De Saint Jean, C.; Archier, P.; Privas, E.; Noguère, G.; Habert, B.; Tamagno, P.

    2018-02-01

    Nuclear data, along with numerical methods and the associated calculation schemes, continue to play a key role in reactor design, reactor core operating parameters calculations, fuel cycle management and criticality safety calculations. Due to the intensive use of Monte-Carlo calculations reducing numerical biases, the final accuracy of neutronic calculations increasingly depends on the quality of nuclear data used. This paper gives a broad picture of all ingredients treated by nuclear data evaluators during their analyses. After giving an introduction to nuclear data evaluation, we present implications of using the Bayesian inference to obtain evaluated cross sections and related uncertainties. In particular, a focus is made on systematic uncertainties appearing in the analysis of differential measurements as well as advantages and drawbacks one may encounter by analyzing integral experiments. The evaluation work is in general done independently in the resonance and in the continuum energy ranges giving rise to inconsistencies in evaluated files. For future evaluations on the whole energy range, we call attention to two innovative methods used to analyze several nuclear reaction models and impose constraints. Finally, we discuss suggestions for possible improvements in the evaluation process to master the quantification of uncertainties. These are associated with experiments (microscopic and integral), nuclear reaction theories and the Bayesian inference.
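
    A minimal sketch of the linearized Bayesian (generalized least-squares) update that such evaluations typically rest on: prior model parameters and their covariance are adjusted using measurements, their covariance, and a sensitivity matrix. All numbers below are illustrative, not evaluated nuclear data.

        import numpy as np

        x0 = np.array([1.0, 2.0])                       # prior model parameters (illustrative)
        M0 = np.diag([0.10, 0.20]) ** 2                 # prior parameter covariance

        y = np.array([1.05, 2.90, 4.10])                # measurements (differential or integral)
        V = np.diag([0.05, 0.08, 0.10]) ** 2            # measurement covariance
        G = np.array([[1.0, 0.0],                       # sensitivities d(calculated)/d(parameters)
                      [1.0, 1.0],
                      [0.0, 2.0]])

        # Generalized least-squares (linear Bayesian) update of parameters and covariance.
        K = M0 @ G.T @ np.linalg.inv(G @ M0 @ G.T + V)  # gain
        x1 = x0 + K @ (y - G @ x0)                      # posterior parameters
        M1 = M0 - K @ G @ M0                            # posterior (reduced) covariance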

  17. Using Finite Element and Eigenmode Expansion Methods to Investigate the Periodic and Spectral Characteristic of Superstructure Fiber Bragg Gratings

    PubMed Central

    He, Yue-Jing; Hung, Wei-Chih; Lai, Zhe-Ping

    2016-01-01

    In this study, a numerical simulation method was employed to investigate and analyze superstructure fiber Bragg gratings (SFBGs) with five duty cycles (50%, 33.33%, 14.28%, 12.5%, and 10%). This study focuses on demonstrating the relationship between the design period and the spectral characteristics of SFBGs (in the form of graphics) for SFBGs of all duty cycles. Compared with the complicated and hard-to-learn conventional coupled-mode theory, the results of the present study may assist beginner and expert designers in understanding the basic application aspects, optical characteristics, and design techniques of SFBGs, thereby lowering the level of physical concepts and mathematical skills required to enter the design field. To effectively improve the accuracy of the overall computational performance and numerical calculations, and to shorten the gap between simulation results and actual production, this study integrated a perfectly matched layer (PML), perfectly reflecting boundary (PRB), object meshing method (OMM), and boundary meshing method (BMM) into the finite element method (FEM) and eigenmode expansion method (EEM). The integrated method enables designers to easily and flexibly design optical fiber communication systems that conform to a specific spectral characteristic by using the simulation data in this paper, which include bandwidth, number of channels, and band gap size. PMID:26861322

  18. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
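
    As a toy illustration of the weighting idea behind phase-function truncation and importance sampling (not the MSCART implementation), scattering cosines can be drawn from a less forward-peaked Henyey-Greenstein proposal and corrected with importance weights; the asymmetry parameters and sample size are arbitrary.

        import numpy as np

        def hg_pdf(mu, g):
            # Henyey-Greenstein phase function, normalized over mu in [-1, 1].
            return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

        def hg_sample(g, n, rng):
            # Standard inverse-CDF sampling of the Henyey-Greenstein scattering cosine.
            u = rng.random(n)
            return (1.0 + g**2 - ((1.0 - g**2) / (1.0 - g + 2.0 * g * u)) ** 2) / (2.0 * g)

        rng = np.random.default_rng(0)
        g_true, g_prop, n = 0.9, 0.5, 100_000        # peaked target, milder proposal

        mu = hg_sample(g_prop, n, rng)               # sample from the milder proposal
        w = hg_pdf(mu, g_true) / hg_pdf(mu, g_prop)  # importance weights

        # Weighted estimate of the mean scattering cosine under the true phase
        # function; it should be close to g_true.
        print(np.mean(w * mu))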

  19. EMGD-FE: an open source graphical user interface for estimating isometric muscle forces in the lower limb using an EMG-driven model

    PubMed Central

    2014-01-01

    Background: This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step by step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. Results: An example of the application’s functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well as the knee isometric torque are shown. Conclusions: The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues. PMID:24708668
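
    A minimal sketch (not the EMGD-FE code) of the kind of ODE integration involved: first-order activation dynamics driven by a normalized EMG envelope, followed by a much-simplified isometric force estimate. The EMG envelope, time constants, and maximum force are placeholder assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def emg(t):
            # Placeholder normalized EMG envelope in [0, 1] as a function of time [s].
            return 0.3 * (1.0 + np.sin(2.0 * np.pi * 0.5 * t))

        def activation_rhs(t, a, tau_act=0.015, tau_deact=0.060):
            # First-order activation dynamics, a common Hill-type ingredient:
            # faster time constant during activation than deactivation (assumed values).
            u = emg(t)
            tau = tau_act if u > a[0] else tau_deact
            return [(u - a[0]) / tau]

        sol = solve_ivp(activation_rhs, (0.0, 4.0), [0.0], max_step=1e-3)

        # Much-simplified isometric force: activation times a hypothetical maximum
        # isometric force (force-length and other factors omitted for brevity).
        F_max = 1000.0                     # [N], placeholder value
        force = F_max * sol.y[0]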

  20. Impact of Basal Hydrology Near Grounding Lines: Results from the MISMIP-3D and MISMIP+ Experiments Using the Community Ice Sheet Model

    NASA Astrophysics Data System (ADS)

    Leguy, G.; Lipscomb, W. H.; Asay-Davis, X.

    2017-12-01

    Ice sheets and ice shelves are linked by the transition zone, the region where the grounded ice lifts off the bedrock and begins to float. Adequate resolution of the transition zone is necessary for numerically accurate ice sheet-ice shelf simulations. In previous work we have shown that, by using a simple parameterization of the basal hydrology, a smoother transition in basal water pressure between floating and grounded ice improves the numerical accuracy of a one-dimensional, vertically integrated, fixed-grid model. We used a set of experiments based on the Marine Ice Sheet Model Intercomparison Project (MISMIP) to show that reliable grounding-line dynamics are achievable at resolutions of 1 km. In this presentation we use the Community Ice Sheet Model (CISM) to demonstrate how the representation of basal lubrication impacts three-dimensional models using the MISMIP-3D and MISMIP+ experiments. To this end we will compare three different Stokes approximations: the Shallow Shelf Approximation (SSA), a depth-integrated higher-order approximation, and the Blatter-Pattyn model. The results from our one-dimensional model carry over to the 3-D models; a resolution of 1 km (and in some cases 2 km) remains sufficient to accurately simulate grounding-line dynamics.

  1. EMGD-FE: an open source graphical user interface for estimating isometric muscle forces in the lower limb using an EMG-driven model.

    PubMed

    Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K

    2014-04-04

    This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step by step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well as the knee isometric torque are shown. The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues.

  2. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  3. Improving the trust in results of numerical simulations and scientific data analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results, such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  4. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Zimmermann, E.; Huisman, J. A.; Treichel, A.; Wolters, B.; van Waasen, S.; Kemna, A.

    2013-08-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now be made in the mHz to kHz frequency range. This increased accuracy in the kHz range will allow a more accurate field characterization of the complex electrical conductivity of soils and sediments, which may lead to the improved estimation of saturated hydraulic conductivity from electrical properties. Although the correction methods have been developed for a custom-made EIT system, they also have potential to improve the phase accuracy of EIT measurements made with commercial systems relying on multicore cables.

  5. Numerical considerations for Lagrangian stochastic dispersion models: Eliminating rogue trajectories, and the importance of numerical accuracy

    USDA-ARS?s Scientific Manuscript database

    When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...

  6. Exploiting Superconvergence in Discontinuous Galerkin Methods for Improved Time-Stepping and Visualization

    DTIC Science & Technology

    2016-09-08

    Accuracy Conserving (SIAC) filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC... establishing a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7]; 2... Li, "SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh", International Conference on Spectral and Higher Order Methods (ICOSAHOM

  7. tran-SAS v1.0: a numerical model to compute catchment-scale hydrologic transport using StorAge Selection functions

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; Bertuzzo, Enrico

    2018-04-01

    This paper presents the tran-SAS package, which includes a set of codes to model solute transport and water residence times through a hydrological system. The model is based on a catchment-scale approach that aims at reproducing the integrated response of the system at one of its outlets. The codes are implemented in MATLAB and are meant to be easy to edit, so that users with minimal programming knowledge can adapt them to the desired application. The problem of large-scale solute transport has both theoretical and practical implications. On the one side, the ability to represent the ensemble of water flow trajectories through a heterogeneous system helps unraveling streamflow generation processes and allows us to make inferences on plant-water interactions. On the other side, transport models are a practical tool that can be used to estimate the persistence of solutes in the environment. The core of the package is based on the implementation of an age master equation (ME), which is solved using general StorAge Selection (SAS) functions. The age ME is first converted into a set of ordinary differential equations, each addressing the transport of an individual precipitation input through the catchment, and then it is discretized using an explicit numerical scheme. Results show that the implementation is efficient and allows the model to run in short times. The numerical accuracy is critically evaluated and it is shown to be satisfactory in most cases of hydrologic interest. Additionally, a higher-order implementation is provided within the package to evaluate and, if necessary, to improve the numerical accuracy of the results. The codes can be used to model streamflow age and solute concentration, but a number of additional outputs can be obtained by editing the codes to further advance the ability to understand and model catchment transport processes.

  8. 3-D numerical simulations of earthquake ground motion in sedimentary basins: testing accuracy through stringent models

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Hollender, Fabrice; Bard, Pierre-Yves; Priolo, Enrico; Klin, Peter; de Martin, Florent; Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2015-04-01

    Differences between 3-D numerical predictions of earthquake ground motion in the Mygdonian basin near Thessaloniki, Greece, led us to define four canonical stringent models derived from the complex realistic 3-D model of the Mygdonian basin. Sediments atop an elastic bedrock are modelled in the 1D-sharp and 1D-smooth models using three homogeneous layers and smooth velocity distribution, respectively. The 2D-sharp and 2D-smooth models are extensions of the 1-D models to an asymmetric sedimentary valley. In all cases, 3-D wavefields include strongly dispersive surface waves in the sediments. We compared simulations by the Fourier pseudo-spectral method (FPSM), the Legendre spectral-element method (SEM) and two formulations of the finite-difference method (FDM-S and FDM-C) up to 4 Hz. The accuracy of individual solutions and level of agreement between solutions vary with type of seismic waves and depend on the smoothness of the velocity model. The level of accuracy is high for the body waves in all solutions. However, it strongly depends on the discrete representation of the material interfaces (at which material parameters change discontinuously) for the surface waves in the sharp models. An improper discrete representation of the interfaces can cause inaccurate numerical modelling of surface waves. For all the numerical methods considered, except SEM with mesh of elements following the interfaces, a proper implementation of interfaces requires definition of an effective medium consistent with the interface boundary conditions. An orthorhombic effective medium is shown to significantly improve accuracy and preserve the computational efficiency of modelling. The conclusions drawn from the analysis of the results of the canonical cases greatly help to explain differences between numerical predictions of ground motion in realistic models of the Mygdonian basin. We recommend that any numerical method and code that is intended for numerical prediction of earthquake ground motion should be verified through stringent models that would make it possible to test the most important aspects of accuracy.

  9. Hybrid RANS-LES using high order numerical methods

    NASA Astrophysics Data System (ADS)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  10. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, M. H.; Giebel, G.; Nielsen, T. S.; Hahmann, A.; Sørensen, P.; Madsen, H.

    2012-04-01

    This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the working title "Integrated Wind Power Planning Tool". The project commenced October 1, 2011, and the goal is to integrate a numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long-term power system planning for future wind farms as well as short-term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. With regard to the latter, one such simulation tool has been developed at the Wind Energy Division, Risø DTU, intended for long-term power system planning. As part of the PSO project the inferior NWP model used at present will be replaced by the state-of-the-art Weather Research & Forecasting (WRF) model. Furthermore, the integrated simulation tool will be improved so that it can simultaneously handle 10-50 times more turbines than the present ~300, and additional atmospheric parameters will be included in the model. The WRF data will also be input for a statistical short-term prediction model to be developed in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. This integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision-making problem to be solved. The high resolution of the WRF results loaded into the integrated prediction model will ensure that a high-accuracy data basis is available for use in the decision-making process of the Danish transmission system operator, and the need for high-accuracy predictions will only increase over the next decade as Denmark approaches the goal of 50% wind-power-based electricity in 2020, up from the current 20%.

  11. Analytical investigations of seismic responses for reinforced concrete bridge columns subjected to strong near-fault ground motion

    NASA Astrophysics Data System (ADS)

    Su, Chin-Kuo; Sung, Yu-Chi; Chang, Shuenn-Yih; Huang, Chao-Hsun

    2007-09-01

    Strong near-fault ground motion, usually caused by fault rupture and characterized by a pulse-like velocity waveform, often imparts dramatic instantaneous seismic energy (Jadhav and Jangid 2006). Some reinforced concrete (RC) bridge columns, even those built according to ductile design principles, were damaged in the 1999 Chi-Chi earthquake. Thus, it is very important to evaluate the seismic response of an RC bridge column to improve its seismic design and prevent future damage. Nonlinear time history analysis using step-by-step integration is capable of tracing the dynamic response of a structure during the entire vibration period and is able to accommodate the pulsing waveform. However, the accuracy of the numerical results is very sensitive to the modeling of the nonlinear load-deformation relationship of the structural member. FEMA 273 and ATC-40 provide the modeling parameters for structural nonlinear analyses of RC beams and RC columns. They use three parameters to define the plastic rotation angles and a residual strength ratio to describe the nonlinear load-deformation relationship of an RC member. Structural nonlinear analyses are performed based on these parameters. This method provides a convenient way to obtain the nonlinear seismic responses of RC structures. However, the accuracy of the numerical solutions might be further improved. For this purpose, results from a previous study on modeling of static pushover analyses for RC bridge columns (Sung et al. 2005) are adopted for the nonlinear time history analysis presented herein to evaluate the structural responses excited by a near-fault ground motion. To ensure the reliability of this approach, the numerical results were compared to experimental results. The results confirm that the proposed approach is valid.

  12. An efficient algorithm for orbital evolution of space debris

    NASA Astrophysics Data System (ADS)

    Abdel-Aziz, Y.; Abd El-Salam, F.

    More than four decades of space exploration have led to the accumulation of significant quantities of debris around the Earth. These objects range in size from tiny pieces of junk to large inoperable satellites; although many of these objects are small, they have high area-to-mass ratios, and consequently their orbits are strongly influenced by solar radiation pressure and atmospheric drag. The population of space debris objects in LEO, MEO and GEO is therefore growing with time and presents a serious hazard for the survival of operating spacecraft, particularly satellites and astronomical observatories, since the average collision velocity between a spacecraft and debris objects is about 10 km/s in LEO and about 3 km/s in GEO. Space debris may significantly disturb satellite operations or cause catastrophic damage to a spacecraft itself. By applying different shielding techniques, a spacecraft may be protected against impacts of space debris with diameters smaller than 1 cm. For larger debris objects, the only effective way to avoid the catastrophic consequences of a collision is a manoeuvre that changes the spacecraft orbit. The necessary condition in this case is to evaluate and predict future positions of the spacecraft and the space debris with sufficient accuracy. Numerical integration of the equations of motion has been used for this purpose until now; existing analytical methods can solve this problem only with low accuracy. Difficulties are caused mainly by the lack of a satisfying analytical solution of the resonance problem for geosynchronous orbits, as well as by the lack of an efficient analytical theory combining luni-solar perturbations and solar radiation pressure with the geopotential attraction. Numerical integration is time consuming in some cases, and for qualitative analysis of the satellite's and debris's motion it is necessary to apply an analytical solution. This is the reason for searching for an accurate model to evaluate the orbital position of operating satellites and space debris. The present paper develops a second-order theory of perturbations (in the sense of the Hori-Lie perturbation method) that includes the geopotential effect, luni-solar perturbations, solar radiation pressure and atmospheric drag. Resonance and very long period perturbations are modeled with the use of semi-secular terms for short time-span predictions. We present a comparison of our analytical solution with numerical integration of the motion for chosen artificial satellites in LEO, MEO and GEO, as well as for different space debris objects with different area-to-mass ratios, showing good accuracy of the theory.

  13. On the accuracy of ERS-1 orbit predictions

    NASA Technical Reports Server (NTRS)

    Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.

    1993-01-01

    Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) regularly provides orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of the elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained, and problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy is compiled for about 250 days since launch. The average accuracy lies in the range of 50-100 ms and has improved considerably.

  14. Fast methods to numerically integrate the Reynolds equation for gas fluid films

    NASA Technical Reports Server (NTRS)

    Dimofte, Florin

    1992-01-01

    The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running times compared with methods based on inverting a matrix. The computer codes need a small amount of memory and can be run on either personal computers or mainframe systems.
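
    A minimal sketch of Liebmann's iterative solution (point Gauss-Seidel) for an elliptic model problem, here Laplace's equation on a square with one fixed-value edge, as a stand-in for the bearing pressure equation; grid size, boundary values, and tolerance are arbitrary.

        import numpy as np

        n = 31                              # grid points per side (illustrative)
        p = np.zeros((n, n))
        p[0, :] = 1.0                       # assumed boundary condition on one edge

        # Liebmann (Gauss-Seidel) sweeps: each interior node is replaced in place by the
        # average of its four neighbours until successive sweeps agree to a tolerance.
        for sweep in range(20_000):
            diff = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    new = 0.25 * (p[i + 1, j] + p[i - 1, j] + p[i, j + 1] + p[i, j - 1])
                    diff = max(diff, abs(new - p[i, j]))
                    p[i, j] = new
            if diff < 1e-5:
                break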

  15. Attitude/attitude-rate estimation from GPS differential phase measurements using integrated-rate parameters

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, Landis

    1998-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
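
    As a small illustration of that modeling choice (not the paper's filter), a discrete-time, exponentially autocorrelated (first-order Gauss-Markov) angular-acceleration model; the correlation time, step size, and noise level are assumptions.

        import numpy as np

        dt, tau, sigma = 0.1, 5.0, 1e-3     # step [s], correlation time [s], steady-state std (assumed)
        phi = np.exp(-dt / tau)             # discrete-time transition factor
        q = sigma**2 * (1.0 - phi**2)       # driving-noise variance preserving the steady-state variance

        rng = np.random.default_rng(1)
        alpha = np.zeros(600)               # one axis of angular acceleration [rad/s^2]
        for k in range(1, alpha.size):
            alpha[k] = phi * alpha[k - 1] + rng.normal(0.0, np.sqrt(q))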

  16. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    NASA Astrophysics Data System (ADS)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  17. Construction and comparison of parallel implicit kinetic solvers in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Titarev, Vladimir; Dumbser, Michael; Utyuzhnikov, Sergey

    2014-01-01

    The paper is devoted to the further development and systematic performance evaluation of a recent deterministic framework Nesvetay-3D for modelling three-dimensional rarefied gas flows. Firstly, a review of the existing discretization and parallelization strategies for solving numerically the Boltzmann kinetic equation with various model collision integrals is carried out. Secondly, a new parallelization strategy for the implicit time evolution method is implemented which improves scaling on large CPU clusters. Accuracy and scalability of the methods are demonstrated on a pressure-driven rarefied gas flow through a finite-length circular pipe as well as an external supersonic flow over a three-dimensional re-entry geometry of complicated aerodynamic shape.

  18. A lift formula applied to low-Reynolds-number unsteady flows

    NASA Astrophysics Data System (ADS)

    Wang, Shizhao; Zhang, Xing; He, Guowei; Liu, Tianshu

    2013-09-01

    A lift formula for a wing in a rectangular control volume is given in a very simple and physically lucid form, providing a rational foundation for calculation of the lift of a flapping wing in highly unsteady and separated flows at low Reynolds numbers. Direct numerical simulations on the stationary and flapping two-dimensional flat plate and rectangular flat-plate wing are conducted to assess the accuracy of the lift formula along with the classical Kutta-Joukowski theorem. In particular, the Lamb vector integral for the vortex force and the acceleration term of fluid for the unsteady inertial effect are evaluated as the main contributions to the unsteady lift generation of a flapping wing.
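
    For orientation (the paper's rectangular-control-volume formula itself is not reproduced here), the classical two-dimensional result it is compared against, and the Lamb-vector form of the vortex force mentioned in the abstract, can be written for steady incompressible flow as

    \[
      L' = \rho\, U_\infty\, \Gamma, \qquad
      \mathbf{F}_{\mathrm{vortex}} = -\rho \int_{V} \boldsymbol{\omega} \times \mathbf{u} \,\mathrm{d}V ,
    \]

    where $\Gamma$ is the circulation around the body, $\boldsymbol{\omega} \times \mathbf{u}$ is the Lamb vector, and $V$ is the control volume.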

  19. A New Code SORD for Simulation of Polarized Light Scattering in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-01-01

    We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in plane-parallel atmosphere of the Earth. Using 44 benchmark tests, we prove high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe capabilities of SORD and show run time for each test on two different machines. At present, SORD is supposed to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/.

  20. The method of fundamental solutions for computing acoustic interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Kleefeld, Andreas; Pieronek, Lukas

    2018-03-01

    We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.

  1. Experimental and numerical characterization of the sound pressure in standing wave acoustic levitators

    NASA Astrophysics Data System (ADS)

    Stindt, A.; Andrade, M. A. B.; Albrecht, M.; Adamowski, J. C.; Panne, U.; Riedel, J.

    2014-01-01

    A novel method for predicting the sound pressure distribution in acoustic levitators is based on a matrix representation of the Rayleigh integral. This method allows for a fast calculation of the acoustic field within the resonator. To make sure that the underlying assumptions and simplifications are justified, this approach was tested by a direct comparison to experimental data. The experimental sound pressure distributions were recorded by highly spatially resolved, frequency-selective microphone scanning. To emphasize the general applicability of the two approaches, the comparative studies were conducted for four different resonator geometries. In all cases, the results show an excellent agreement, demonstrating the accuracy of the matrix method.

  2. Explicit frequency equations of free vibration of a nonlocal Timoshenko beam with surface effects

    NASA Astrophysics Data System (ADS)

    Zhao, Hai-Sheng; Zhang, Yao; Lie, Seng-Tjhen

    2018-02-01

    Considerations of nonlocal elasticity and surface effects in micro- and nanoscale beams are both important for the accurate prediction of natural frequency. In this study, the governing equation of a nonlocal Timoshenko beam with surface effects is established by taking into account three types of boundary conditions: hinged-hinged, clamped-clamped and clamped-hinged ends. For a hinged-hinged beam, an exact and explicit natural frequency equation is obtained. However, for clamped-clamped and clamped-hinged beams, the solutions of corresponding frequency equations must be determined numerically due to their transcendental nature. Hence, the Fredholm integral equation approach coupled with a curve fitting method is employed to derive the approximate fundamental frequency equations, which can predict the frequency values with high accuracy. In short, explicit frequency equations of the Timoshenko beam for three types of boundary conditions are proposed to exhibit directly the dependence of the natural frequency on the nonlocal elasticity, surface elasticity, residual surface stress, shear deformation and rotatory inertia, avoiding the complicated numerical computation.

  3. Benchmark solution for the Spencer-Lewis equation of electron transport theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations, primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, S_n methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.

  4. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If, furthermore, fast methods like the fast multipole method are used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.

  5. Aerodynamic influence coefficient method using singularity splines

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1974-01-01

    A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-square-error sense over the surface using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.

  6. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

    This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal map and barycentric rational interpolation techniques to overcome the side effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.

  7. Space Weather Forecasting Operational Needs: A view from NOAA/SWPC

    NASA Astrophysics Data System (ADS)

    Biesecker, D. A.; Onsager, T. G.; Rutledge, R.

    2017-12-01

    The gaps in space weather forecasting are many. From long lead-time forecasts to accurate warnings issued with enough lead time to take action, there is plenty of room for improvement. Significant numbers of new observations would improve this picture, but it's also important to recognize the value of numerical modeling. The ideal interplanetary mission concepts would be 1) measuring the in-situ solar wind along the entire Sun-Earth line, from as near to the Sun as possible all the way to Earth, and 2) flying a string of spacecraft in 1 AU heliocentric orbits making in-situ measurements as well as remote-sensing observations of the Sun, corona, and heliosphere. Even partially achieving these ideals would benefit space weather services, improving lead time and providing greater accuracy further into the future. The observations alone would improve forecasting. However, integrating these data into numerical models, as boundary conditions or via data assimilation, would provide the greatest improvements.

  8. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first-order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.

  9. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. With this end in view, the pollutant concentration was considered an integral quantity which may be accepted if one properly chooses the unit of measurement (in this case μg m⁻³) and if account is taken of the limited accuracy of measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles, cumulative probabilities of threshold concentrations to be exceeded, and so on, in the grid points of a network covering the area of interest. This only needs accurate estimations of means and variances of the concentration series which can readily be obtained through routine air pollution dispersion modelling.
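    The Eggenberger-Polya law is commonly identified with the negative binomial (Polya) distribution, so the moment-matching step described above, estimating the distribution from a mean and a variance and then reading off quantiles and exceedance probabilities, can be sketched as follows. The concentration values, the threshold, and the use of scipy.stats.nbinom are illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from scipy.stats import nbinom

# Illustrative sample of hourly SO2 concentrations (integer-valued, in ug/m^3).
conc = np.array([12, 8, 15, 30, 22, 9, 41, 18, 25, 11, 7, 33, 19, 27, 14])

m, v = conc.mean(), conc.var(ddof=1)
# Moment-matched Polya (negative binomial) parameters; requires v > m (overdispersion).
p = m / v
r = m * m / (v - m)

q99 = nbinom.ppf(0.99, r, p)        # 99th percentile concentration
p_exceed = nbinom.sf(50, r, p)      # probability of exceeding a 50 ug/m^3 threshold
print(q99, p_exceed)
```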

  10. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (among the lowest costs reported in the literature), and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.

  11. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. This consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.

  12. Event and Apparent Horizon Finders for 3 + 1 Numerical Relativity.

    PubMed

    Thornburg, Jonathan

    2007-01-01

    Event and apparent horizons are key diagnostics for the presence and properties of black holes. In this article I review numerical algorithms and codes for finding event and apparent horizons in numerically-computed spacetimes, focusing on calculations done using the 3 + 1 ADM formalism. The event horizon of an asymptotically-flat spacetime is the boundary between those events from which a future-pointing null geodesic can reach future null infinity and those events from which no such geodesic exists. The event horizon is a (continuous) null surface in spacetime. The event horizon is defined nonlocally in time: it is a global property of the entire spacetime and must be found in a separate post-processing phase after all (or at least the nonstationary part) of spacetime has been numerically computed. There are three basic algorithms for finding event horizons, based on integrating null geodesics forwards in time, integrating null geodesics backwards in time, and integrating null surfaces backwards in time. The last of these is generally the most efficient and accurate. In contrast to an event horizon, an apparent horizon is defined locally in time in a spacelike slice and depends only on data in that slice, so it can be (and usually is) found during the numerical computation of a spacetime. A marginally outer trapped surface (MOTS) in a slice is a smooth closed 2-surface whose future-pointing outgoing null geodesics have zero expansion Θ. An apparent horizon is then defined as a MOTS not contained in any other MOTS. The MOTS condition is a nonlinear elliptic partial differential equation (PDE) for the surface shape, containing the ADM 3-metric, its spatial derivatives, and the extrinsic curvature as coefficients. Most "apparent horizon" finders actually find MOTSs. There are a large number of apparent horizon finding algorithms, with differing trade-offs between speed, robustness, accuracy, and ease of programming. In axisymmetry, shooting algorithms work well and are fairly easy to program. In slices with no continuous symmetries, spectral integral-iteration algorithms and elliptic-PDE algorithms are fast and accurate, but require good initial guesses to converge. In many cases, Schnetter's "pretracking" algorithm can greatly improve an elliptic-PDE algorithm's robustness. Flow algorithms are generally quite slow but can be very robust in their convergence. Minimization methods are slow and relatively inaccurate in the context of a finite differencing simulation, but in a spectral code they can be relatively faster and more robust.

  13. An efficient blocking M2L translation for low-frequency fast multipole method in three dimensions

    NASA Astrophysics Data System (ADS)

    Takahashi, Toru; Shimba, Yuta; Isakari, Hiroshi; Matsumoto, Toshiro

    2016-05-01

    We propose an efficient scheme to perform the multipole-to-local (M2L) translation in the three-dimensional low-frequency fast multipole method (LFFMM). Our strategy is to combine a group of matrix-vector products associated with the M2L translation into a matrix-matrix product in order to diminish the memory traffic. For this purpose, we first developed a grouping method (termed internal blocking) based on the congruent transformations (rotational and reflectional symmetries) of M2L-translators for each target box in the FMM hierarchy (adaptive octree). Next, we considered another method of grouping (termed external blocking) that was able to handle M2L translations for multiple target boxes collectively by using the translational invariance of the M2L translation. By combining these internal and external blockings, the M2L translation can be performed efficiently whilst preserving the numerical accuracy exactly. We assessed the proposed blocking scheme numerically and applied it to the boundary integral equation method to solve electromagnetic scattering problems for a perfect electric conductor. From the numerical results, it was found that the proposed M2L scheme achieved a speedup of a few times compared to the non-blocking scheme.
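    The core idea of the blocking, replacing a group of matrix-vector products that share one M2L translator with a single matrix-matrix product, can be illustrated in a few lines. The sketch below uses random data and a dense translator purely for demonstration; the paper's internal/external grouping based on rotational and reflectional symmetries of the octree is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 64                                    # length of a multipole/local expansion (illustrative)
n_boxes = 200                             # source boxes sharing the same relative M2L translator

T = rng.standard_normal((p, p))                    # one M2L translation operator
multipoles = rng.standard_normal((p, n_boxes))     # stacked multipole coefficient vectors

# Non-blocked: one matrix-vector product per source box.
locals_mv = np.stack([T @ multipoles[:, k] for k in range(n_boxes)], axis=1)

# Blocked: a single matrix-matrix product over the whole group.
locals_mm = T @ multipoles

assert np.allclose(locals_mv, locals_mm)  # identical results, but far better memory traffic
```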

  14. Comprehensive gravitational modeling of the vertical cylindrical prism by Gauss-Legendre quadrature integration

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, M. F.; Hashemi, H.; von Frese, R. R. B.

    2018-01-01

    Forward modeling is the basis of gravitational anomaly inversion that is widely applied to map subsurface mass variations. This study uses numerical least-squares Gauss-Legendre quadrature (GLQ) integration to evaluate the gravitational potential, anomaly and gradient components of the vertical cylindrical prism element. These results, in turn, may be integrated to accurately model the complete gravitational effects of fluid-bearing rock formations and other vertical cylinder-like geological bodies with arbitrary variations in shape and density. Comparing the GLQ gravitational effects of uniform density, vertical circular cylinders against the effects calculated by a number of other methods illustrates the veracity of the GLQ modeling method and the accuracy limitations of the other methods. Geological examples include modeling the gravitational effects of a formation washout to help map azimuthal variations of the formation's bulk densities around the borehole wall. As another application, the gravitational effects of a seismically and gravimetrically imaged salt dome within the Laurentian Basin are evaluated for the velocity, density and geometric properties of the Basin's sedimentary formations.
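    A minimal sketch of GLQ forward modeling is given below for the special case of an observation point on the axis of a uniform vertical cylinder, where a closed-form attraction exists for comparison. The constants, prism dimensions and node counts are illustrative assumptions; the paper's general off-axis potential, anomaly and gradient modeling is considerably more involved.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
rho = 2670.0             # density [kg m^-3]
R, z1, z2 = 500.0, 100.0, 600.0   # radius, top and bottom depths below the station [m]

def glq_gz(nr=16, nphi=16, nz=16):
    """Vertical attraction at the origin (on the cylinder axis) by 3-D GLQ."""
    xr, wr = leggauss(nr)
    xp, wp = leggauss(nphi)
    xz, wz = leggauss(nz)
    # map [-1, 1] nodes to [0, R], [0, 2*pi], [z1, z2]
    r = 0.5 * R * (xr + 1.0); wr = 0.5 * R * wr
    wp = np.pi * wp
    z = 0.5 * (z2 - z1) * (xz + 1.0) + z1; wz = 0.5 * (z2 - z1) * wz
    gz = 0.0
    for ri, wri in zip(r, wr):
        for zi, wzi in zip(z, wz):
            kern = zi / (ri**2 + zi**2) ** 1.5 * ri    # vertical kernel times Jacobian r
            gz += kern * wri * wzi * wp.sum()          # azimuth integral is separable on-axis
    return G * rho * gz

g_closed = 2 * np.pi * G * rho * ((z2 - z1) + np.hypot(R, z1) - np.hypot(R, z2))
print(glq_gz(), g_closed)    # agreement to several significant digits
```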

  15. Static and free-vibration analyses of cracks in thin-shell structures based on an isogeometric-meshfree coupling approach

    NASA Astrophysics Data System (ADS)

    Nguyen-Thanh, Nhon; Li, Weidong; Zhou, Kun

    2018-03-01

    This paper develops a coupling approach which integrates the meshfree method and isogeometric analysis (IGA) for static and free-vibration analyses of cracks in thin-shell structures. In this approach, the domain surrounding the cracks is represented by the meshfree method while the rest domain is meshed by IGA. The present approach is capable of preserving geometry exactness and high continuity of IGA. The local refinement is achieved by adding the nodes along the background cells in the meshfree domain. Moreover, the equivalent domain integral technique for three-dimensional problems is derived from the additional Kirchhoff-Love theory to compute the J-integral for the thin-shell model. The proposed approach is able to address the problems involving through-the-thickness cracks without using additional rotational degrees of freedom, which facilitates the enrichment strategy for crack tips. The crack tip enrichment effects and the stress distribution and displacements around the crack tips are investigated. Free vibrations of cracks in thin shells are also analyzed. Numerical examples are presented to demonstrate the accuracy and computational efficiency of the coupling approach.

  16. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    NASA Astrophysics Data System (ADS)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy also when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation, specialized quadrature methods for singular and nearly singular integrals that appear in the formulation are evoked and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  17. Triangulation-based 3D surveying borescope

    NASA Astrophysics Data System (ADS)

    Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.

    2016-04-01

    In this work, a measurement concept based on triangulation was developed for borescopic 3D-surveying of surface defects. The integration of such measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical raytracing methods. Additionally, optical aberrations and defocus were considered by the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of +/- 5 μm. To manage the issue of a low depth of field while using an optical high resolution system, a wavelength dependent aperture was integrated. Thereby, we are able to control depth of field and resolution of the optical system and can use the borescope in measurement mode with high resolution and low depth of field or in inspection mode with low resolution and higher depth of field. First measurements of a demonstrator system are in good agreement with our simulations.

  18. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    NASA Astrophysics Data System (ADS)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.

  19. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then also studied by R. T. Seeley. The apparatus of the boundary pseudodifferential equations, which has formerly been used mostly in the qualitative theory of integral equations and PDE'S, is now effectively employed for developing numerical methods in the different fields of scientific computing.

  20. Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark

    1998-01-01

    A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
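    To make the spatial part concrete, the sketch below applies the classical fourth-order Padé (compact) first-derivative scheme on a periodic grid and verifies it on a smooth function. It shows only the compact operator in isolation, under an assumed periodic boundary; the paper's coupling to low-storage Runge-Kutta time stepping and a consistent Poisson solve for pressure is not shown.

```python
import numpy as np

def compact4_deriv(f, h):
    """Fourth-order Pade (compact) first derivative on a periodic grid:
       (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
    n = f.size
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                       # periodic wrap-around
    rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact4_deriv(np.sin(x), h) - np.cos(x)))
print(f"max error, n={n}: {err:.2e}")   # error decays like O(h^4) under grid refinement
```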

  1. Number-Density Measurements of CO2 in Real Time with an Optical Frequency Comb for High Accuracy and Precision

    NASA Astrophysics Data System (ADS)

    Scholten, Sarah K.; Perrella, Christopher; Anstie, James D.; White, Richard T.; Al-Ashwal, Waddah; Hébert, Nicolas Bourbeau; Genest, Jérôme; Luiten, Andre N.

    2018-05-01

    Real-time and accurate measurements of gas properties are highly desirable for numerous real-world applications. Here, we use an optical-frequency comb to demonstrate absolute number-density and temperature measurements of a sample gas with state-of-the-art precision and accuracy. The technique is demonstrated by measuring the number density of ¹²C¹⁶O₂ with an accuracy of better than 1% and a precision of 0.04% in a measurement and analysis cycle of less than 1 s. This technique is transferable to numerous molecular species, thus offering an avenue for near-universal gas concentration measurements.

  2. Executive Function Effects and Numerical Development in Children: Behavioural and ERP Evidence from a Numerical Stroop Paradigm

    ERIC Educational Resources Information Center

    Soltesz, Fruzsina; Goswami, Usha; White, Sonia; Szucs, Denes

    2011-01-01

    Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the…

  3. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  4. Enviro-HIRLAM online integrated meteorology-chemistry modelling system: strategy, methodology, developments and applications (v7.2)

    NASA Astrophysics Data System (ADS)

    Baklanov, Alexander; Smith Korsholm, Ulrik; Nuterman, Roman; Mahura, Alexander; Pagh Nielsen, Kristian; Hansen Sass, Bent; Rasmussen, Alix; Zakey, Ashraf; Kaas, Eigil; Kurganskiy, Alexander; Sørensen, Brian; González-Aparicio, Iratxe

    2017-08-01

    The Environment - High Resolution Limited Area Model (Enviro-HIRLAM) is developed as a fully online integrated numerical weather prediction (NWP) and atmospheric chemical transport (ACT) model for research and forecasting of joint meteorological, chemical and biological weather. The integrated modelling system is developed by the Danish Meteorological Institute (DMI) in collaboration with several European universities. It is the baseline system in the HIRLAM Chemical Branch and used in several countries and different applications. The development was initiated at DMI more than 15 years ago. The model is based on the HIRLAM NWP model with online integrated pollutant transport and dispersion, chemistry, aerosol dynamics, deposition and atmospheric composition feedbacks. To make the model suitable for chemical weather forecasting in urban areas, the meteorological part was improved by implementation of urban parameterisations. The dynamical core was improved by implementing a locally mass-conserving semi-Lagrangian numerical advection scheme, which improves forecast accuracy and model performance. The current version (7.2), in comparison with previous versions, has a more advanced and cost-efficient chemistry, aerosol multi-compound approach, aerosol feedbacks (direct and semi-direct) on radiation and (first and second indirect effects) on cloud microphysics. Since 2004, the Enviro-HIRLAM has been used for different studies, including operational pollen forecasting for Denmark since 2009 and operational forecasting of atmospheric composition with downscaling for China since 2017. Following the main research and development strategy, further model developments will be extended towards the new NWP platform - HARMONIE. Different aspects of online coupling methodology, research strategy and possible applications of the modelling system, and fit-for-purpose model configurations for the meteorological and air quality communities are discussed.

  5. Robust Decision-making Applied to Model Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  6. Discretizing singular point sources in hyperbolic wave propagation problems

    DOE PAGES

    Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...

    2016-06-01

    Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
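    The moment conditions mentioned above can be illustrated for a 1-D point source: the weights that spread a delta function at an off-grid location over a few neighbouring grid points follow from a small Vandermonde system. The sketch below imposes moment conditions only (the additional smoothness conditions derived in the paper are omitted), and the grid, source location and number of conditions are arbitrary choices.

```python
import numpy as np

def delta_weights(xs, xgrid, n_moments=4):
    """Weights w_j placing a unit point source at xs on the n_moments nearest grid
       points so that the first n_moments discrete moments of delta(x - xs) are exact."""
    idx = np.argsort(np.abs(xgrid - xs))[:n_moments]
    idx.sort()
    d = xgrid[idx] - xs
    # Moment conditions: sum_j w_j d_j^k = delta_{k0} / h  for k = 0 .. n_moments-1
    V = np.vander(d, n_moments, increasing=True).T
    h = xgrid[1] - xgrid[0]
    rhs = np.zeros(n_moments); rhs[0] = 1.0 / h
    w = np.zeros_like(xgrid)
    w[idx] = np.linalg.solve(V, rhs)
    return w

xg = np.linspace(0.0, 1.0, 101)
w = delta_weights(0.4037, xg)
print(w[w != 0.0], np.sum(w) * (xg[1] - xg[0]))   # discrete integral of the source is 1
```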

  7. Simple satellite orbit propagator

    NASA Astrophysics Data System (ADS)

    Gurfil, P.

    2008-06-01

    An increasing number of space missions require on-board autonomous orbit determination. The purpose of this paper is to develop a simple orbit propagator (SOP) for such missions. Since most satellites are limited by the available processing power, it is important to develop an orbit propagator that will use limited computational and memory resources. In this work, we show how to choose state variables for propagation using the simplest numerical integration scheme available, the explicit Euler integrator. The new state variables are derived by the following rationale: apply a variation-of-parameters not on the gravity-affected orbit, but rather on the gravity-free orbit, and treat the gravity as a generalized force. This ultimately leads to a state vector comprising the inertial velocity and a modified position vector, wherein the product of velocity and time is subtracted from the inertial position. It is shown that the explicit Euler integrator, applied to the new state variables, becomes a symplectic integrator, preserving the Hamiltonian and the angular momentum (or a component thereof in the case of oblateness perturbations). The main application of the proposed propagator is estimation of mean orbital elements. It is shown that the SOP is capable of estimating the mean elements with an accuracy that is comparable to a high-order integrator that consumes an order-of-magnitude more computational time than the SOP.
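    A minimal sketch of the propagation idea, explicit Euler applied to the inertial velocity and the modified position q = r - v*t with gravity evaluated on the reconstructed position r = q + v*t, is given below for two-body gravity. The initial orbit, step size and printout are illustrative assumptions; whether the conservation properties claimed in the paper carry over exactly depends on details of the formulation not reproduced here.

```python
import numpy as np

mu = 3.986004418e14                    # Earth's GM [m^3/s^2]

def accel(r):
    return -mu * r / np.linalg.norm(r) ** 3

# Circular low-Earth-orbit initial conditions (illustrative)
r = np.array([7.0e6, 0.0, 0.0])
v = np.array([0.0, np.sqrt(mu / 7.0e6), 0.0])
t, dt, n = 0.0, 10.0, 6000

q = r - v * t                          # modified position: q = r - v*t
for _ in range(n):
    a = accel(q + v * t)               # reconstruct r = q + v*t when evaluating gravity
    q = q - dt * t * a                 # dq/dt = -t * a(r)
    v = v + dt * a                     # dv/dt =  a(r)
    t += dt
r = q + v * t
h = np.cross(r, v)                     # angular momentum, a quantity to monitor for drift
print(np.linalg.norm(r), np.linalg.norm(h))
```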

  8. Maximum Correntropy Unscented Kalman Filter for Ballistic Missile Navigation System based on SINS/CNS Deeply Integrated Mode.

    PubMed

    Hou, Bowen; He, Zhangming; Li, Dong; Zhou, Haiyin; Wang, Jiongqi

    2018-05-27

    Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high precision navigation technique for ballistic missiles. The traditional navigation method suffers from divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. Nonlinearity, as well as large outliers and Gaussian mixture noises, often exists during the actual navigation process, leading to divergence of the navigation filter. A new nonlinear Kalman filter based on the maximum correntropy theory and the unscented transformation, named the maximum correntropy unscented Kalman filter, is deduced, and its computational complexity is analyzed. The unscented transformation is used to handle the nonlinearity of the system equation, and the maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced by the proposed filter.
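    Of the two ingredients named above, the unscented transformation is the easier one to sketch: sigma points are propagated through the nonlinearity and reweighted to recover a mean and covariance. The snippet below shows only this standard transform on a toy range/bearing function; the maximum correntropy modification of the measurement update, which is the paper's contribution, is not implemented, and all numbers are illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f via sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])          # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam)); wm[0] = lam / (n + lam)
    wc = wm.copy(); wc[0] += 1.0 - alpha**2 + beta
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    dY = Y - y_mean
    y_cov = (wc[:, None] * dY).T @ dY
    return y_mean, y_cov

# Example: range/bearing observation of a 2-D position with Gaussian uncertainty
f = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
m, P = np.array([1000.0, 500.0]), np.diag([25.0, 25.0])
print(unscented_transform(m, P, f))
```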

  9. Cancer survival classification using integrated data sets and intermediate information.

    PubMed

    Kim, Shinuk; Park, Taesung; Kon, Mark

    2014-09-01

    Although numerous studies related to cancer survival have been published, increasing the prediction accuracy of survival classes still remains a challenge. Integration of different data sets, such as microRNA (miRNA) and mRNA, might increase the accuracy of survival class prediction. Therefore, we suggested a machine learning (ML) approach to integrate different data sets, and developed a novel method based on feature selection with Cox proportional hazard regression model (FSCOX) to improve the prediction of cancer survival time. FSCOX provides us with intermediate survival information, which is usually discarded when separating survival into 2 groups (short- and long-term), and allows us to perform survival analysis. We used an ML-based protocol for feature selection, integrating information from miRNA and mRNA expression profiles at the feature level. To predict survival phenotypes, we used the following classifiers, first, existing ML methods, support vector machine (SVM) and random forest (RF), second, a new median-based classifier using FSCOX (FSCOX_median), and third, an SVM classifier using FSCOX (FSCOX_SVM). We compared these methods using 3 types of cancer tissue data sets: (i) miRNA expression, (ii) mRNA expression, and (iii) combined miRNA and mRNA expression. The latter data set included features selected either from the combined miRNA/mRNA profile or independently from miRNAs and mRNAs profiles (IFS). In the ovarian data set, the accuracy of survival classification using the combined miRNA/mRNA profiles with IFS was 75% using RF, 86.36% using SVM, 84.09% using FSCOX_median, and 88.64% using FSCOX_SVM with a balanced 22 short-term and 22 long-term survivor data set. These accuracies are higher than those using miRNA alone (70.45%, RF; 75%, SVM; 75%, FSCOX_median; and 75%, FSCOX_SVM) or mRNA alone (65.91%, RF; 63.64%, SVM; 72.73%, FSCOX_median; and 70.45%, FSCOX_SVM). Similarly in the glioblastoma multiforme data, the accuracy of miRNA/mRNA using IFS was 75.51% (RF), 87.76% (SVM) 85.71% (FSCOX_median), 85.71% (FSCOX_SVM). These results are higher than the results of using miRNA expression and mRNA expression alone. In addition we predict 16 hsa-miR-23b and hsa-miR-27b target genes in ovarian cancer data sets, obtained by SVM-based feature selection through integration of sequence information and gene expression profiles. Among the approaches used, the integrated miRNA and mRNA data set yielded better results than the individual data sets. The best performance was achieved using the FSCOX_SVM method with independent feature selection, which uses intermediate survival information between short-term and long-term survival time and the combination of the 2 different data sets. The results obtained using the combined data set suggest that there are some strong interactions between miRNA and mRNA features that are not detectable in the individual analyses. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    NASA Astrophysics Data System (ADS)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends upon the history and the random nature of delays, is also discussed. Support from NSF is gratefully acknowledged.

  11. On the Magnetic Squashing Factor and the Lie Transport of Tangents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Roger B.; Pontin, David I.; Hornig, Gunnar

    The squashing factor (or squashing degree) of a vector field is a quantitative measure of the deformation of the field line mapping between two surfaces. In the context of solar magnetic fields, it is often used to identify gradients in the mapping of elementary magnetic flux tubes between various flux domains. Regions where these gradients in the mapping are large are referred to as quasi-separatrix layers (QSLs), and are a continuous extension of separators and separatrix surfaces. These QSLs are observed to be potential sites for the formation of strong electric currents, and are therefore important for the study of magnetic reconnection in three dimensions. Since the squashing factor, Q, is defined in terms of the Jacobian of the field line mapping, it is most often calculated by first determining the mapping between two surfaces (or some approximation of it) and then numerically differentiating. Tassev and Savcheva have introduced an alternative method, in which they parameterize the change in separation between adjacent field lines, and then integrate along individual field lines to get an estimate of the Jacobian without the need to numerically differentiate the mapping itself. But while their method offers certain computational advantages, it is formulated on a perturbative description of the field line trajectory, and the accuracy of this method is not entirely clear. Here we show, through an alternative derivation, that this integral formulation is, in principle, exact. We then demonstrate the result in the case of a linear, 3D magnetic null, which allows for an exact analytical description and direct comparison to numerical estimates.
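    For contrast with the integral formulation discussed in the paper, the sketch below evaluates the squashing factor in the conventional way: trace the field-line mapping with an ODE solver, finite-difference it to get the Jacobian, and form Q = (a^2 + b^2 + c^2 + d^2)/|ad - bc|. A simple linear field is used so the result can be checked analytically; the field, footpoints and step sizes are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.5
B = lambda x, y, z: np.array([alpha * x, -alpha * y, 1.0])   # simple divergence-free field

def field_line_map(x0, y0, z_top=1.0):
    """Trace a field line from the z=0 plane to z=z_top; return the footpoint (X, Y)."""
    rhs = lambda z, xy: B(xy[0], xy[1], z)[:2] / B(xy[0], xy[1], z)[2]
    sol = solve_ivp(rhs, (0.0, z_top), [x0, y0], rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def squashing_factor(x0, y0, eps=1e-5):
    """Q from finite differences of the field-line mapping (the standard approach)."""
    Xp = field_line_map(x0 + eps, y0); Xm = field_line_map(x0 - eps, y0)
    Yp = field_line_map(x0, y0 + eps); Ym = field_line_map(x0, y0 - eps)
    a, c = (Xp - Xm) / (2 * eps)          # dX/dx0, dY/dx0
    b, d = (Yp - Ym) / (2 * eps)          # dX/dy0, dY/dy0
    return (a**2 + b**2 + c**2 + d**2) / abs(a * d - b * c)

# Analytic check: for this field Q = exp(2*alpha) + exp(-2*alpha), independent of footpoint.
print(squashing_factor(0.3, 0.2), np.exp(2 * alpha) + np.exp(-2 * alpha))
```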

  12. Development of a defect stream function, law of the wall/wake method for compressible turbulent boundary layers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wahls, Richard A.

    1990-01-01

    The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.

  13. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  14. Entropy Splitting and Numerical Dissipation

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Vinokur, M.; Djomehri, M. J.

    1999-01-01

    A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potential desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation indeed is needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? Extensive numerical study on the vortex preservation capability of the splitting in conjunction with central schemes for long time integrations will be presented. The third is to study the effect of the non-conservative proportion of splitting in obtaining the correct shock location for high speed complex shock-turbulence interactions. The fourth is to determine if this method can be extended to other physical equations of state and other evolutionary equation sets. If numerical dissipation is needed, the Yee, Sandham, and Djomehri (1999) numerical dissipation is employed. The Yee et al. schemes fit in the Olsson and Oliger framework.

  15. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which eventual deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to promptly be recognized. The Sun calibration allows monitoring the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  16. Height system unification based on the Fixed Geodetic Boundary Value Problem with limited availability of gravity data

    NASA Astrophysics Data System (ADS)

    Porz, Lucas; Grombein, Thomas; Seitz, Kurt; Heck, Bernhard; Wenzel, Friedemann

    2017-04-01

    Regional height reference systems are generally related to individual vertical datums defined by specific tide gauges. The discrepancies of these vertical datums with respect to a unified global datum cause height system biases that range in an order of 1-2 m at a global scale. One approach for unification of height systems relates to the solution of a Geodetic Boundary Value Problem (GBVP). In particular, the fixed GBVP, using gravity disturbances as boundary values, is solved at GNSS/leveling benchmarks, whereupon height datum offsets can be estimated by least squares adjustment. In spherical approximation, the solution of the fixed GBVP is obtained by Hotine's spherical integral formula. However, this method relies on the global availability of gravity data. In practice, gravity data of the necessary resolution and accuracy is not accessible globally. Thus, the integration is restricted to an area within the vicinity of the computation points. The resulting truncation error can reach several meters in height, making height system unification without further consideration of this effect unfeasible. This study analyzes methods for reducing the truncation error by combining terrestrial gravity data with satellite-based global geopotential models and by modifying the integral kernel in order to accelerate the convergence of the resulting potential. For this purpose, EGM2008-derived gravity functionals are used as pseudo-observations to be integrated numerically. Geopotential models of different spectral degrees are implemented using a remove-restore-scheme. Three types of modification are applied to the Hotine-kernel and the convergence of the resulting potential is analyzed. In a further step, the impact of these operations on the estimation of height datum offsets is investigated within a closed loop simulation. A minimum integration radius in combination with a specific modification of the Hotine-kernel is suggested in order to achieve sub-cm accuracy for the estimation of height datum offsets.
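    A rough numerical illustration of the truncation issue is given below: Hotine's kernel is evaluated and the spherical integral is carried out only over a cap of limited radius, so the recovered disturbing potential depends visibly on the cap size. The synthetic gravity-disturbance profile and cap radii are invented for illustration; the kernel modifications and the remove-restore combination with a global geopotential model studied in the paper are not included.

```python
import numpy as np

R = 6371000.0                          # mean Earth radius [m]

def hotine_kernel(psi):
    """Hotine's spherical kernel H(psi) = 1/sin(psi/2) - ln(1 + 1/sin(psi/2))."""
    s = np.sin(0.5 * psi)
    return 1.0 / s - np.log(1.0 + 1.0 / s)

# Synthetic, azimuth-independent gravity disturbance [m s^-2] (illustrative only)
dg = lambda psi: 30e-5 * np.exp(-(np.degrees(psi) / 2.0) ** 2)

def disturbing_potential(cap_deg, n=4000):
    """Disturbing potential at the computation point from Hotine's integral,
       truncated at a spherical cap of radius cap_deg (trapezoidal rule in psi)."""
    psi = np.linspace(1e-6, np.radians(cap_deg), n)
    f = dg(psi) * hotine_kernel(psi) * np.sin(psi) * 2.0 * np.pi   # azimuth pre-integrated
    return R / (4.0 * np.pi) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(psi))

for cap in (1.0, 3.0, 6.0, 10.0):
    print(f"cap {cap:5.1f} deg  ->  T = {disturbing_potential(cap):.3f} m^2/s^2")
```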

  17. Digital phase-locked-loop speed sensor for accuracy improvement in analog speed controls. [feedback control and integrated circuits

    NASA Technical Reports Server (NTRS)

    Birchenough, A. G.

    1975-01-01

    A digital speed control that can be combined with a proportional analog controller is described. The stability and transient response of the analog controller were retained and combined with the long-term accuracy of a crystal-controlled integral controller. A relatively simple circuit was developed by using phase-locked-loop techniques and total error storage. The integral digital controller will maintain speed control accuracy equal to that of the crystal reference oscillator.

  18. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, Martin; Giebel, Gregor; Skov Nielsen, Torben; Hahmann, Andrea; Sørensen, Poul; Madsen, Henrik

    2013-04-01

    This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, with the title "Integrated Wind Power Planning Tool". The goal is to integrate a mesoscale numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long term power system planning for future wind farms as well as short term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. Using the state-of-the-art mesoscale Weather Research & Forecasting model (WRF), the aim is to quantify the forecast error as a function of the time scale involved. This task constitutes a preparative study for later implementation of features accounting for NWP forecast errors in the DTU Wind Energy maintained Corwind code - a long term wind power planning tool. Within the framework of PSO 10464 research related to operational short term wind power prediction will be carried out, including a comparison of forecast quality at different mesoscale NWP model resolutions and development of a statistical wind power prediction tool taking input from WRF. The short term prediction part of the project is carried out in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. The integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies, and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated short term prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision making problem to be solved. The high resolution of the WRF results loaded into the integrated prediction model will ensure a high accuracy data basis is available for use in the decision making process of the Danish transmission system operator. The need for high accuracy predictions will only increase over the next decade as Denmark approaches the goal of 50% wind power based electricity in 2025 from the current 20%.

  19. Understanding dental CAD/CAM for restorations--dental milling machines from a mechanical engineering viewpoint. Part B: labside milling machines.

    PubMed

    Lebon, Nicolas; Tapie, Laurent; Duret, Francois; Attal, Jean-Pierre

    2016-01-01

    Nowadays, dental numerical controlled (NC) milling machines are available for dental laboratories (labside solution) and dental production centers. This article provides a mechanical engineering approach to NC milling machines to help dental technicians understand the involvement of technology in digital dentistry practice. The technical and economic criteria are described for four labside and two production center dental NC milling machines available on the market. The technical criteria are focused on the capacities of the embedded technologies of milling machines to mill prosthetic materials and various restoration shapes. The economic criteria are focused on investment cost and interoperability with third-party software. The clinical relevance of the technology is discussed through the accuracy and integrity of the restoration. It can be asserted that dental production center milling machines offer a wider range of materials and types of restoration shapes than labside solutions, while labside solutions offer a wider range than chairside solutions. The accuracy and integrity of restorations may be improved as a function of the embedded technologies provided. However, the more complex the technical solutions available, the more skilled the user must be. Investment cost and interoperability with third-party software increase according to the quality of the embedded technologies implemented. Each private dental practice may decide which fabrication option to use depending on the scope of the practice.

  20. Industrial Raman gas sensing for real-time system control

    NASA Astrophysics Data System (ADS)

    Buric, M.; Mullen, J.; Chorpening, B.; Woodruff, S.

    2014-06-01

    Opportunities exist to improve on-line process control in energy applications with a fast, non-destructive measurement of gas composition. Here, we demonstrate a Raman sensing system which is capable of reporting the concentrations of numerous species simultaneously with sub-percent accuracy and sampling times below one second for process control applications in energy or chemical production. The sensor is based upon a hollow-core capillary waveguide with a 300 micron bore and reflective thin-film metal and dielectric linings. The effect of using such a waveguide in a Raman process is to integrate Raman photons along the length of the sample-filled waveguide, thus permitting the acquisition of very large Raman signals for low-density gases in a short time. The resultant integrated Raman signals can then be used for quick and accurate analysis of a gaseous mixture. The sensor is currently being tested for energy applications such as coal gasification, turbine control, well-head monitoring for exploration or production, and non-conventional gas utilization. In conjunction with an ongoing commercialization effort, the researchers have recently completed two prototype instruments suitable for hazardous area operation and testing. Here, we report pre-commercialization testing of those field prototypes for control applications in gasification or similar processes. Results will be discussed with respect to accuracy, calibration requirements, gas sampling techniques, and possible control strategies of industrial significance.
