Sample records for polynomial expansion method

  1. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
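
    As a companion to the non-intrusive PCE described above, the following sketch fits a one-dimensional Hermite chaos by least-squares regression and reads the mean and variance directly off the expansion coefficients. The model function, polynomial order, and sample count are arbitrary placeholders and are not taken from the CASL/VIPRE study.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermevander

    # Hypothetical smooth model of a single standard-normal input xi (placeholder,
    # not the VIPRE model).
    def model(xi):
        return np.exp(0.3 * xi) + 0.1 * xi ** 2

    rng = np.random.default_rng(0)
    order = 6
    xi = rng.standard_normal(2000)              # samples of the random input
    A = hermevander(xi, order)                  # columns He_0(xi) ... He_order(xi)
    coeffs, *_ = np.linalg.lstsq(A, model(xi), rcond=None)

    # For probabilists' Hermite polynomials, E[He_n(xi)^2] = n!, so the mean and
    # variance follow directly from the expansion coefficients.
    norms = np.array([factorial(n) for n in range(order + 1)], dtype=float)
    mean = coeffs[0]
    var = np.sum(coeffs[1:] ** 2 * norms[1:])
    print(f"PCE mean = {mean:.4f}, PCE variance = {var:.4f}")
    ```

    The same structure carries over to several inputs by replacing the univariate basis with a tensor (or total-degree) product basis; the adaptive p-refinement of the study then amounts to increasing the order only in influential dimensions.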

  2. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We presented a numerical method to solve the phase dispersion curves in general anisotropic plates. This approach involves an exact solution to the problem in the form of a Legendre polynomial expansion of multiple integrals, which we substituted into the state-vector formalism. In order to improve the efficiency of the proposed method, we made a special effort to demonstrate the analytical methodology. Furthermore, we analyzed the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method was the expansion of field quantities by Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. This state-vector formalism combined with Legendre polynomial expansion distinguished adjacent dispersion modes clearly, even when the modes were very close. We then illustrated the theoretical solutions of the dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compared the proposed method with the global matrix method (GMM), which showed excellent agreement.

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
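
    To make the sparse-recovery setting above concrete, the sketch below recovers a Legendre expansion that is sparse in the basis from fewer samples than basis functions. It uses scikit-learn's Lasso (an ℓ1-penalized least-squares solver) as a stand-in for the basis-pursuit formulation discussed in the abstract, and plain "natural" uniform sampling rather than the coherence-optimal MCMC sampling proposed by the authors; the target function and sizes are invented.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legvander, legval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)

    # Hypothetical target that is sparse in the Legendre basis: only P_0, P_3, P_8 active.
    def f(x):
        return 1.0 + 0.8 * legval(x, [0, 0, 0, 1.0]) - 0.5 * legval(x, [0] * 8 + [1.0])

    order, n_samples = 30, 20                  # fewer samples than basis functions
    x = rng.uniform(-1.0, 1.0, n_samples)      # "natural" sampling from the uniform density
    A = legvander(x, order)                    # measurement matrix of Legendre evaluations

    fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(A, f(x))
    print(np.round(fit.coef_, 3))              # largest entries should sit at indices 0, 3, 8
    ```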

  4. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
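
    The paper's exact algorithm is not reproduced here, but the underlying idea can be sketched minimally: assume a monomial expansion of the density on a known support and choose the coefficients so that its raw moments match the given ones, which reduces to a small linear system. The helper name and the uniform-density check case below are illustrative assumptions.

    ```python
    import numpy as np

    def poly_pdf_from_moments(moments, a, b):
        """Coefficients a_k of f(x) ~ sum_k a_k x^k on [a, b] whose raw moments
        int_a^b x^n f(x) dx match the given moments m_0..m_N (m_0 = 1 for a PDF)."""
        N = len(moments) - 1
        M = np.array([[(b ** (n + k + 1) - a ** (n + k + 1)) / (n + k + 1)
                       for k in range(N + 1)] for n in range(N + 1)])
        return np.linalg.solve(M, np.asarray(moments, dtype=float))

    # Check case: moments of the uniform density on [0, 1] are m_n = 1/(n+1);
    # the exact answer is f(x) = 1, i.e. coefficients [1, 0, 0, 0].
    m = [1.0 / (n + 1) for n in range(4)]
    print(np.round(poly_pdf_from_moments(m, 0.0, 1.0), 6))
    ```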

  5. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  6. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Motivated by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-body Problem. The ζ-component motion is treated as the dominant motion and the ξ- and η-component motions are treated as the slave motions. The slave motions are in nature related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. Then the approximate three-dimensional vertical periodic solution can be analytically obtained by solving the dominant motion only in the ζ-direction. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view to explore the overall dynamics of periodic orbits around libration points with general rules.

  7. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  8. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  9. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  10. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  11. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
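
    The Monte Carlo step described above, once a closed-form PCE surrogate is available, is inexpensive. The sketch below illustrates it with a toy two-parameter polynomial surrogate; the parameter names, coefficients, and threshold are invented and have no connection to the sequestration model.

    ```python
    import numpy as np

    # Hypothetical closed-form PCE surrogate in two uncertain parameters
    # (permeability k and porosity phi, both standardized to [-1, 1]).
    def surrogate(k, phi):
        return 0.62 + 0.11 * k - 0.07 * phi + 0.03 * k * phi - 0.02 * (3 * k ** 2 - 1) / 2

    rng = np.random.default_rng(2)
    k, phi = rng.uniform(-1, 1, (2, 100_000))   # cheap Monte Carlo on the surrogate
    g = surrogate(k, phi)
    print(f"mean = {g.mean():.4f}, std = {g.std():.4f}, "
          f"P(output > 0.7) = {(g > 0.7).mean():.4f}")
    ```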

  12. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC)... represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic... of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that the global polynomial interpolation cannot resolve lo...

  13. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula expressing explicitly the integrals of ultraspherical polynomials of any degree that have been integrated an arbitrary number of times in terms of the ultraspherical polynomials themselves is given. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.

  14. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for numerical solutions of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, matrices for differentiation are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into finding the solution of an algebraic system of equations. Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results acquired confirm the applicability of the method.

  15. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM) in its one-dimensional version is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion for approximating the curve on a non-equidistant point grid. The corridors of the given data and criteria define the optimal behavior of the searched curve. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.
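
    OPEM itself is the authors' own construction, so it is not reproduced here; the sketch below only shows the generic ingredient it relies on, an error-weighted orthogonal-polynomial fit on a non-equidistant grid, followed by locating the minimum of the fitted curve. The wavelength grid, error bars, and underlying curve are synthetic placeholders.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(3)
    # Hypothetical absorption spectrum on a non-equidistant wavelength grid (nm).
    lam = np.sort(rng.uniform(350.0, 550.0, 60))
    sigma = rng.uniform(0.005, 0.02, lam.size)          # per-point experimental errors
    signal = 0.4 + 0.3 * ((lam - 410.0) / 60.0) ** 2    # smooth curve with a minimum near 410 nm
    y = signal + sigma * rng.standard_normal(lam.size)

    x = 2 * (lam - lam.min()) / (lam.max() - lam.min()) - 1   # map grid to [-1, 1]
    coef = L.legfit(x, y, deg=6, w=1.0 / sigma)               # error-weighted fit
    fit = L.legval(x, coef)
    print("estimated minimum near", round(lam[np.argmin(fit)], 1), "nm")
    ```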

  16. Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.

    PubMed

    Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko

    2014-04-01

    The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.

  17. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions to the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  19. A discrete method for modal analysis of overhead line conductor bundles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.

    The paper presents a mathematical model and a semi-analytical procedure to calculate the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed for evaluation of the wind-induced vibration of conductors and for optimization of spacer-damper placement. The method consists of decomposing the conductors into modules and expanding the unknown displacements on each module in polynomial series. A complete system of polynomials is deduced for this from Legendre polynomials. For each module, either boundary conditions at the extremities of the module or continuity conditions between the modules are imposed, together with a number of projections of the module equilibrium equation onto the polynomials from the expansion series of the unknown displacement. The global system for the eigenmodes and eigenfrequencies is of the matrix form A X + ω^2 M X = 0. The theoretical considerations are exemplified on one conductor and on a bundle of two conductors with spacers. From this, a method for forced-vibration calculation of single or bundled conductors is also presented.

  20. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to the basin and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDF) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
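
    The Karhunen-Loève step described above amounts to a singular value decomposition of the centered ensemble. A minimal sketch, assuming a synthetic sound-speed ensemble rather than NCOM output:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ens, n_depth = 32, 200                    # ensemble members x depth grid
    # Hypothetical sound-speed ensemble (m/s): mean profile plus random perturbations.
    z = np.linspace(0.0, 1.0, n_depth)
    mean_c = 1500.0 + 20.0 * z
    ensemble = mean_c \
        + rng.standard_normal((n_ens, 1)) * 2.0 * np.sin(np.pi * z) \
        + rng.standard_normal((n_ens, 1)) * 0.8 * np.sin(2 * np.pi * z)

    anomalies = ensemble - ensemble.mean(axis=0)
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

    # Karhunen-Loeve modes (rows of Vt) and the uncorrelated coefficients that
    # parameterize each member; a Hermite PC expansion of transmission loss would
    # then be built as a polynomial in these coefficients.
    modes = Vt
    kl_coeffs = U * s
    print("variance captured by first two modes:",
          np.round((s[:2] ** 2).sum() / (s ** 2).sum(), 3))
    ```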

  1. Computation of solar wind parameters from the OGO-5 plasma spectrometer data using Hermite polynomials

    NASA Technical Reports Server (NTRS)

    Neugebauer, M.

    1971-01-01

    A method for calculating the velocity, temperature, and density of the solar wind plasma from spectra obtained by attitude-stabilized plasma detectors on the earth satellite OGO 5 is presented. The method, which uses expansions in terms of Hermite polynomials, is very inexpensive to implement on an electronic computer compared to the least-squares and other iterative methods often used for similar problems.

  2. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, which states that a suitable pair of independent variables is taken as modal coordinates and the remaining state variables are expressed as polynomial series of them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around collinear libration points, and up to order eight and six for the planar and vertical-periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around equilibrium points. To check their validity, the accuracy of initial states determined by the polynomial expansions is evaluated.

  3. The s-Ordered Fock Space Projectors Gained by the General Ordering Theorem

    NASA Astrophysics Data System (ADS)

    Farid, Shähandeh; Mohammad, Reza Bazrafkan; Mahmoud, Ashrafi

    2012-09-01

    Employing the general ordering theorem (GOT), operational methods and incomplete 2-D Hermite polynomials, we derive the t-ordered expansion of Fock space projectors. Using the result, the general ordered form of the coherent state projectors is obtained. This indeed gives a new integration formula regarding incomplete 2-D Hermite polynomials. In addition, the orthogonality relation of the incomplete 2-D Hermite polynomials is derived to resolve Dattoli's failure.

  4. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.

  5. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by 3F2(...|1) polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by 4F3(...|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.

  6. Polynomial modal analysis of slanted lamellar gratings.

    PubMed

    Granet, Gérard; Randriamihaja, Manjakavola Honore; Raniriharinosy, Karyl

    2017-06-01

    The problem of diffraction by slanted lamellar dielectric and metallic gratings in classical mounting is formulated as an eigenvalue eigenvector problem. The numerical solution is obtained by using the moment method with Legendre polynomials as expansion and test functions, which allows us to enforce in an exact manner the boundary conditions which determine the eigensolutions. Our method is successfully validated by comparison with other methods, including the case of highly slanted gratings.

  7. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  8. Solution of the mean spherical approximation for polydisperse multi-Yukawa hard-sphere fluid mixture using orthogonal polynomial expansions

    NASA Astrophysics Data System (ADS)

    Kalyuzhnyi, Yurij V.; Cummings, Peter T.

    2006-03-01

    The Blum-Høye [J. Stat. Phys. 19 317 (1978)] solution of the mean spherical approximation for a multicomponent multi-Yukawa hard-sphere fluid is extended to a polydisperse multi-Yukawa hard-sphere fluid. Our extension is based on the application of the orthogonal polynomial expansion method of Lado [Phys. Rev. E 54, 4411 (1996)]. Closed form analytical expressions for the structural and thermodynamic properties of the model are presented. They are given in terms of the parameters that follow directly from the solution. By way of illustration the method of solution is applied to describe the thermodynamic properties of the one- and two-Yukawa versions of the model.

  9. Partial-fraction expansion and inverse Laplace transform of a rational function with real coefficients

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.; Mott, H.

    1974-01-01

    This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor's series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
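
    The paper's Taylor/Laurent-series technique is not reproduced here; purely as an illustration of the end result, the sketch below obtains a partial-fraction expansion and the corresponding inverse Laplace transform symbolically with SymPy. The example rational function is arbitrary.

    ```python
    import sympy as sp

    s, t = sp.symbols('s t', positive=True)
    F = (s + 3) / (s**3 + 3*s**2 + 2*s)          # a rational function with real coefficients

    partial = sp.apart(F, s)                     # partial-fraction expansion
    f_t = sp.inverse_laplace_transform(F, s, t)  # corresponding time-domain signal
    print(partial)          # 3/(2*s) - 2/(s + 1) + 1/(2*(s + 2)), up to formatting
    print(sp.simplify(f_t)) # 3/2 - 2*exp(-t) + exp(-2*t)/2, times the unit step
    ```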

  10. On the coefficients of integrated expansions of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2006-03-01

    A new formula expressing explicitly the integrals of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another new explicit formula relating the Bessel coefficients of an expansion for infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is also established. An application of these formulae for solving ordinary differential equations with varying coefficients is discussed.

  11. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  12. Best uniform approximation to a class of rational functions

    NASA Astrophysics Data System (ADS)

    Zheng, Zhitong; Yong, Jun-Hai

    2007-10-01

    We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)^2 + K(a,b,c,n)/(x-c) on [a,b], represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy to determine the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some more functions.
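
    The exact minimax construction of the paper is analytical, but the reason Chebyshev expansions are the natural starting point can be seen numerically: an interpolant at Chebyshev points is already a near-best uniform approximation, with an error that nearly equioscillates. The function, interval, pole location, and degree below are arbitrary choices for illustration only.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    a, b, c = 0.0, 1.0, 2.0
    f = lambda x: 1.0 / (x - c) ** 2             # one term of the class studied above

    # Degree-7 interpolant at Chebyshev points of [a, b]; truncated/interpolated
    # Chebyshev series are near-best uniform approximations.
    p = C.Chebyshev.interpolate(f, 7, domain=[a, b])
    x = np.linspace(a, b, 2001)
    err = f(x) - p(x)
    print("max |error| =", np.abs(err).max())    # the error curve nearly equioscillates
    ```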

  13. Zernike expansion of derivatives and Laplacians of the Zernike circle polynomials.

    PubMed

    Janssen, A J E M

    2014-07-01

    The partial derivatives and Laplacians of the Zernike circle polynomials occur in various places in the literature on computational optics. In a number of cases, the expansion of these derivatives and Laplacians in the circle polynomials are required. For the first-order partial derivatives, analytic results are scattered in the literature. Results start as early as 1942 in Nijboer's thesis and continue until present day, with some emphasis on recursive computation schemes. A brief historic account of these results is given in the present paper. By choosing the unnormalized version of the circle polynomials, with exponential rather than trigonometric azimuthal dependence, and by a proper combination of the two partial derivatives, a concise form of the expressions emerges. This form is appropriate for the formulation and solution of a model wavefront sensing problem of reconstructing a wavefront on the level of its expansion coefficients from (measurements of the expansion coefficients of) the partial derivatives. It turns out that the least-squares estimation problem arising here decouples per azimuthal order m, and per m the generalized inverse solution assumes a concise analytic form so that singular value decompositions are avoided. The preferred version of the circle polynomials, with proper combination of the partial derivatives, also leads to a concise analytic result for the Zernike expansion of the Laplacian of the circle polynomials. From these expansions, the properties of the Laplacian as a mapping from the space of circle polynomials of maximal degree N, as required in the study of the Neumann problem associated with the transport-of-intensity equation, can be read off within a single glance. Furthermore, the inverse of the Laplacian on this space is shown to have a concise analytic form.

  14. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
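
    The moments-to-quadrature step mentioned above (Hankel matrix of moments, a few matrix operations, then nodes and weights) can be sketched as follows; this is not the SAMBA code itself, just a minimal one-dimensional version of the idea, checked against the known 3-point Gauss rule for the standard normal distribution.

    ```python
    import numpy as np

    def gauss_from_moments(m):
        """n-point Gaussian quadrature from raw moments m_0..m_{2n}:
        Hankel matrix -> Cholesky -> three-term recurrence -> Jacobi eigenproblem."""
        n = (len(m) - 1) // 2
        H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)], dtype=float)
        R = np.linalg.cholesky(H).T                      # upper-triangular factor, H = R.T @ R
        alpha = np.array([R[k, k + 1] / R[k, k]
                          - (R[k - 1, k] / R[k - 1, k - 1] if k > 0 else 0.0)
                          for k in range(n)])
        beta = np.array([R[k, k] / R[k - 1, k - 1] for k in range(1, n)])
        J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        nodes, vecs = np.linalg.eigh(J)
        weights = m[0] * vecs[0, :] ** 2                 # first components of eigenvectors
        return nodes, weights

    # Check case: moments 1, 0, 1, 0, 3, 0, 15 of the standard normal give the 3-point
    # (probabilists') Gauss-Hermite rule: nodes ~ [-sqrt(3), 0, sqrt(3)], weights [1/6, 2/3, 1/6].
    nodes, weights = gauss_from_moments([1, 0, 1, 0, 3, 0, 15])
    print(np.round(nodes, 6), np.round(weights, 6))
    ```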

  15. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  16. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, both in spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model average or (2) median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; while the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases when represented as a gPC expansion. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations that are 1D, 14D, and 40D in random space.

  17. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as is widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computation expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  18. Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam Carlitz I polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2005-12-01

    Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^ℓ D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae for solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials, belonging to the q-Hahn class, is described.

  19. The accurate solution of Poisson's equation by expansion in Chebyshev polynomials

    NASA Technical Reports Server (NTRS)

    Haidvogel, D. B.; Zang, T.

    1979-01-01

    A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.

  20. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)

  1. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  2. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.

  3. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection... (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth...

  4. A conforming spectral collocation strategy for Stokes flow through a channel contraction

    NASA Technical Reports Server (NTRS)

    Phillips, Timothy N.; Karageorghis, Andreas

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.
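
    As a concrete illustration of the Chebyshev special case mentioned above, the sketch below (an assumption based on the standard backward recurrence, not necessarily the formula derived in this record) computes the Chebyshev coefficients of a differentiated expansion from the original coefficients and checks them against NumPy.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def cheb_derivative_coeffs(a):
        """Coefficients of f' in the Chebyshev basis, given coefficients a of f.

        Uses the classical backward recurrence b_{k-1} = b_{k+1} + 2*k*a_k,
        with the k = 0 coefficient halved at the end.
        """
        n = len(a) - 1                          # polynomial degree
        b = np.zeros(max(n, 1))
        for k in range(n, 0, -1):               # k = n, n-1, ..., 1
            b[k - 1] = (b[k + 1] if k + 1 < n else 0.0) + 2 * k * a[k]
        b[0] *= 0.5
        return b

    a = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # f = T0 - 2*T1 + 0.5*T2 + 3*T3 - T4
    print(cheb_derivative_coeffs(a))            # [ 7. -6. 18. -8.]
    print(C.chebder(a))                         # NumPy reference result
    ```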

  5. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.

  6. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  7. A range-free method to determine Antoine vapor-pressure heat transfer-related equation coefficients using the Boubaker polynomial expansion scheme

    NASA Astrophysics Data System (ADS)

    Koçak, H.; Dahong, Z.; Yildirim, A.

    2011-05-01

    In this study, a range-free method is proposed in order to determine the Antoine constants for a given material (salicylic acid). The main advantage of this method is that it yields analytical expressions that fit different temperature ranges.

  8. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  9. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
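
    For context, a minimal one-dimensional sketch of the least-squares PCE workflow is given below. It uses a toy model, a single uniform input on [-1, 1], Legendre polynomials, and plain Monte Carlo sampling (not the strategies compared in this record); all names and values are illustrative assumptions.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(0)

    def model(x):
        """Toy model with a uniform input on [-1, 1] (illustrative only)."""
        return np.exp(0.5 * x) + 0.1 * x**3

    p, n_samples = 6, 200                        # PCE order and sample size
    x = rng.uniform(-1.0, 1.0, n_samples)        # Monte Carlo experimental design
    y = model(x)

    Psi = L.legvander(x, p)                      # measurement matrix of Legendre polynomials
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # For uniform inputs, E[P_k^2] = 1/(2k+1), so mean and variance follow directly.
    k = np.arange(p + 1)
    pce_mean = coef[0]
    pce_var = np.sum(coef[1:] ** 2 / (2.0 * k[1:] + 1.0))

    mc = model(rng.uniform(-1.0, 1.0, 200000))   # brute-force Monte Carlo reference
    print(pce_mean, mc.mean())
    print(pce_var, mc.var())
    ```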

  10. Generic expansion of the Jastrow correlation factor in polynomials satisfying symmetry and cusp conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander

    2015-02-28

    Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.

  11. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.

  12. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  13. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to the aggressiveness of environmental factors, the variation of the dynamic load, the degeneration of material properties and the wear of the machine surface, parameters related to the structure are distinctly time-variant. A typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties with limited information. Two methods are then presented for the dynamic response analysis of the structure under the time-variant interval process model. The first one is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second one is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials which can be efficiently calculated, and then the variational range of the dynamic response is estimated according to the samples yielded by the Monte Carlo method. To resolve the dependency phenomenon of the interval operation, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ^1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  15. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which is often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1 ls, SpaRSA, CGIST, FPC AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistent lower errors and faster computational times.

  16. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrodinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrodinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.

  17. Uncertainty analysis for the steady-state flows in a dual throat nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q.-Y.; Gottlieb, David; Hesthaven, Jan S.

    2005-03-20

    It is well known that the steady state of an isentropic flow in a dual-throat nozzle with equal throat areas is not unique. In particular there is a possibility that the flow contains a shock wave, whose location is determined solely by the initial condition. In this paper, we consider cases with uncertainty in this initial condition and use generalized polynomial chaos methods to study the steady-state solutions for stochastic initial conditions. Special interest is given to the statistics of the shock location. The polynomial chaos (PC) expansion modes are shown to be smooth functions of the spatial variable x, although each solution realization is discontinuous in the spatial variable x. When the variance of the initial condition is small, the probability density function of the shock location is computed with high accuracy. Otherwise, many terms are needed in the PC expansion to produce reasonable results due to the slow convergence of the PC expansion, caused by non-smoothness in random space.

  18. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.

  19. Theoretical study on the dispersion curves of Lamb waves in piezoelectric-semiconductor sandwich plates GaAs-FGPM-AlAs: Legendre polynomial series expansion

    NASA Astrophysics Data System (ADS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar

    2017-06-01

    In this paper, the propagation of Lamb waves in the GaAs-FGPM-AlAs sandwich plate is studied. Based on orthogonal functions, a Legendre polynomial series expansion is applied along the thickness direction to obtain the Lamb dispersion curves. The convergence and accuracy of this polynomial method are discussed. In addition, the influences of the volume fraction p and thickness h_FGPM of the FGPM middle layer on the Lamb dispersion curves are developed. The numerical results also show differences between the characteristics of the Lamb dispersion curves in the sandwich plate for various gradient coefficients of the FGPM middle layer. In fact, if the volume fraction p increases, the phase velocity increases and the number of modes decreases in a given frequency range. All the developments performed in this paper were implemented in Matlab software. The corresponding results presented in this work may have important applications in several industry areas and in developing novel acoustic devices such as sensors, electromechanical transducers, actuators and filters.

  20. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain optimal discriminant vectors, which exceedingly optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without a larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  1. An Efficient numerical method to calculate the conductivity tensor for disordered topological matter

    NASA Astrophysics Data System (ADS)

    Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.

    2015-03-01

    We propose a new efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism where both diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of the Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort is in the calculation of the expansion coefficients. It also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
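
    As background on the kernel polynomial step described above, here is a minimal sketch of a generic KPM density-of-states calculation (not the authors' Bastin-formula conductivity implementation): Chebyshev moments of a rescaled toy Hamiltonian are accumulated by stochastic trace estimation and damped with the Jackson kernel. The model, sizes and normalization are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy Hamiltonian: 1D tight-binding chain with weak diagonal disorder (illustrative).
    n = 400
    H = (np.diag(0.2 * rng.standard_normal(n))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

    a = 1.05 * np.max(np.abs(np.linalg.eigvalsh(H)))   # rescale spectrum into (-1, 1)
    Ht = H / a

    n_moments, n_random = 128, 10
    mu = np.zeros(n_moments)
    for _ in range(n_random):                          # stochastic trace estimation
        r = rng.choice([-1.0, 1.0], n)
        v0, v1 = r, Ht @ r
        mu[0] += r @ v0
        mu[1] += r @ v1
        for k in range(2, n_moments):                  # Chebyshev recurrence for T_k(Ht) r
            v0, v1 = v1, 2.0 * Ht @ v1 - v0
            mu[k] += r @ v1
    mu /= n_random * n                                 # moments per site

    # Jackson kernel damping to suppress Gibbs oscillations.
    m = np.arange(n_moments)
    g = ((n_moments - m + 1) * np.cos(np.pi * m / (n_moments + 1))
         + np.sin(np.pi * m / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))) / (n_moments + 1)

    x = np.linspace(-0.99, 0.99, 400)
    T = np.cos(np.outer(np.arccos(x), m))              # T_m(x) on the evaluation grid
    dos = (g[0] * mu[0] + 2.0 * (T[:, 1:] * (g[1:] * mu[1:])).sum(axis=1)) / (np.pi * np.sqrt(1 - x**2))
    print(dos[:5])                                     # density of states on the rescaled energy axis
    ```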

  2. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, ℂ) with d = 1 for any integer value ℓ ∈ ℕ. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  3. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    NASA Technical Reports Server (NTRS)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.
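
    A minimal numerical sketch of the same task (illustrative, not the record's closed-form expression): the Bernstein coefficients of p(α + βt) on [0, 1] can be recovered by evaluating the remapped polynomial at n+1 nodes and solving a small collocation system in the Bernstein basis. All names are assumptions.

    ```python
    import numpy as np
    from math import comb

    def bernstein_matrix(t, n):
        """Matrix with entries B_{i,n}(t_j) = C(n, i) * t_j**i * (1 - t_j)**(n - i)."""
        t = np.asarray(t, dtype=float)[:, None]
        i = np.arange(n + 1)[None, :]
        return np.array([comb(n, k) for k in range(n + 1)]) * t**i * (1 - t)**(n - i)

    def remap_bernstein(c, alpha, beta):
        """Bernstein coefficients of q(t) = p(alpha + beta*t), where p has coefficients c on [0, 1]."""
        n = len(c) - 1
        nodes = np.linspace(0.0, 1.0, n + 1)
        values = bernstein_matrix(alpha + beta * nodes, n) @ c   # p evaluated at the mapped nodes
        return np.linalg.solve(bernstein_matrix(nodes, n), values)

    c = np.array([1.0, 0.0, 2.0, -1.0])         # a cubic in Bernstein form on [0, 1]
    d = remap_bernstein(c, 0.25, 0.5)           # coefficients describing p on [0.25, 0.75]

    t = np.linspace(0.0, 1.0, 5)
    print(bernstein_matrix(t, 3) @ d)                  # q(t)
    print(bernstein_matrix(0.25 + 0.5 * t, 3) @ c)     # p(alpha + beta*t), should match
    ```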

  4. Wilson polynomials/functions and intertwining operators for the generic quantum superintegrable system on the 2-sphere

    NASA Astrophysics Data System (ADS)

    Miller, W., Jr.; Li, Q.

    2015-04-01

    The Wilson and Racah polynomials can be characterized as basis functions for irreducible representations of the quadratic symmetry algebra of the quantum superintegrable system on the 2-sphere, HΨ = EΨ, with generic 3-parameter potential. Clearly, the polynomials are expansion coefficients for one eigenbasis of a symmetry operator L2 of H in terms of an eigenbasis of another symmetry operator L1, but the exact relationship appears not to have been made explicit. We work out the details of the expansion to show, explicitly, how the polynomials arise and how the principal properties of these functions: the measure, 3-term recurrence relation, 2nd order difference equation, duality of these relations, permutation symmetry, intertwining operators and an alternate derivation of Wilson functions - follow from the symmetry of this quantum system. This paper is an exercise to show that quantum mechanical concepts and recurrence relations for Gaussian hypergeometric functions alone suffice to explain these properties; we make no assumptions about the structure of Wilson polynomials/functions, but derive them from quantum principles. There is active interest in the relation between multivariable Wilson polynomials and the quantum superintegrable system on the n-sphere with generic potential, and these results should aid in the generalization. Contracting function space realizations of irreducible representations of this quadratic algebra to the other superintegrable systems one can obtain the full Askey scheme of orthogonal hypergeometric polynomials. All of these contractions of superintegrable systems with potential are uniquely induced by Wigner Lie algebra contractions of so(3, C) and e(2, C). All of the polynomials produced are interpretable as quantum expansion coefficients. It is important to extend this process to higher dimensions.

  5. Mathematics of Zernike polynomials: a review.

    PubMed

    McAlinden, Colm; McCartney, Mark; Moore, Jonathan

    2011-11-01

    Monochromatic aberrations of the eye principally originate from the cornea and the crystalline lens. Aberrometers operate via differing principles but function by either analysing the reflected wavefront from the retina or by analysing an image on the retina. Aberrations may be described as lower order or higher order aberrations, with Zernike polynomials being the most commonly employed fitting method. The complex mathematical aspects with regard to the Zernike polynomial expansion series are detailed in this review. Refractive surgery has been a key clinical application of aberrometers; however, more recently aberrometers have been used in a range of other areas of ophthalmology including corneal diseases, cataract and retinal imaging. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.

  6. Factorization of differential expansion for non-rectangular representations

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-04-01

    Factorization of the differential expansion (DE) coefficients for colored HOMFLY-PT polynomials of antiparallel double braids, originally discovered in the case of rectangular representations R, is extended to the first non-rectangular representations R = [2, 1] and R = [3, 1]. This increases the chances that such factorization will take place for generic R, thus fixing the shape of the DE. We illustrate the power of the method by conjecturing the DE-induced expression for double-braid polynomials for all R = [r, 1]. At variance with the rectangular case, the knowledge for double braids is not fully sufficient to deduce the exclusive Racah matrix S¯ — the entries in the sectors with nontrivial multiplicities sum up and remain unseparated. Still, a considerable piece of the matrix is extracted directly and its other elements can be found by solving the unitarity constraints.

  7. Canonical partition functions: ideal quantum gases, interacting classical gases, and interacting quantum gases

    NASA Astrophysics Data System (ADS)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-02-01

    In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, the thermodynamic quantity needs to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of the symmetric function, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases given by the classical and quantum cluster expansion methods in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than calculated from the grand canonical potential.
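
    A minimal sketch of the recursive evaluation implied by the symmetric-function (Bell-polynomial) structure, for an ideal Bose or Fermi gas with a finite single-particle spectrum: Z_N = (1/N) Σ_{k=1}^{N} s^{k+1} z_k Z_{N-k}, with z_k = Σ_i exp(-kβε_i) and s = +1 (Bose) or -1 (Fermi). This is the standard textbook recursion, given here as an assumed illustration rather than the paper's own derivation; the spectrum and parameters are toy values.

    ```python
    import numpy as np

    def canonical_Z(energies, beta, N, statistics="bose"):
        """Canonical partition functions Z_0..Z_N of an ideal quantum gas.

        Uses the recursion Z_N = (1/N) * sum_k s**(k+1) * z_k * Z_{N-k},
        with z_k = sum_i exp(-k*beta*eps_i), s = +1 (Bose) or -1 (Fermi).
        """
        s = 1.0 if statistics == "bose" else -1.0
        z = np.array([np.exp(-k * beta * np.asarray(energies)).sum() for k in range(N + 1)])
        Z = np.zeros(N + 1)
        Z[0] = 1.0
        for n in range(1, N + 1):
            Z[n] = sum(s ** (k + 1) * z[k] * Z[n - k] for k in range(1, n + 1)) / n
        return Z

    # Illustrative example: 5 equally spaced single-particle levels, 3 particles.
    energies = [0.0, 1.0, 2.0, 3.0, 4.0]
    print(canonical_Z(energies, beta=1.0, N=3, statistics="bose")[-1])
    print(canonical_Z(energies, beta=1.0, N=3, statistics="fermi")[-1])
    ```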

  8. Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights

    NASA Astrophysics Data System (ADS)

    Kwon, K. H.; Lee, D. W.

    2001-08-01

    Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of the convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^{-x^2/2} is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted inequalities to hold for k = 0, 1, 2, ..., r.

  9. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.

  10. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  11. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  12. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.

  13. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  14. On Complicated Expansions of Solutions to ODES

    NASA Astrophysics Data System (ADS)

    Bruno, A. D.

    2018-03-01

    Polynomial ordinary differential equations are studied by asymptotic methods. The truncated equation associated with a vertex or a non-horizontal edge of the polygon of the initial equation is assumed to have a solution containing the logarithm of the independent variable. It is shown that, under very weak constraints, this nonpower asymptotic form of solutions to the original equation can be extended to an asymptotic expansion of these solutions. This is an expansion in powers of the independent variable with coefficients being Laurent series in decreasing powers of the logarithm. Such expansions are sometimes called psi-series. Algorithms for such computations are described. Six examples are given. Four of them concern Painlevé equations. An unexpected property of these expansions is revealed.

  15. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  16. Recursive formulas for the partial fraction expansion of a rational function with multiple poles.

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.

    1973-01-01

    The coefficients in the partial fraction expansion considered are given by Heaviside's formula. The evaluation of the coefficients involves the differentiation of a quotient of two polynomials. A simplified approach for the evaluation of the coefficients is discussed. The Leibniz rule is applied and a recurrence formula is derived. A coefficient can also be determined from a system of simultaneous equations. Practical methods for performing the computational operations involved in both approaches are considered.
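
    A minimal symbolic sketch of Heaviside's formula for a repeated pole, illustrating the differentiation of a quotient of polynomials mentioned above: the coefficient of 1/(s - p)^(m - j) is (1/j!) d^j/ds^j [(s - p)^m F(s)] evaluated at s = p. Names and the example function are illustrative; sympy.apart provides a reference result.

    ```python
    import sympy as sp

    s = sp.symbols('s')

    def heaviside_coeffs(F, pole, multiplicity):
        """Coefficients c_j of c_j/(s - pole)**(multiplicity - j), j = 0..multiplicity-1."""
        g = sp.simplify((s - pole) ** multiplicity * F)      # remove the repeated pole
        return [sp.simplify(sp.diff(g, s, j).subs(s, pole) / sp.factorial(j))
                for j in range(multiplicity)]

    # F(s) = (s + 3) / ((s - 1)**3 * (s + 2)): triple pole at s = 1, simple pole at s = -2.
    F = (s + 3) / ((s - 1) ** 3 * (s + 2))
    print(heaviside_coeffs(F, 1, 3))     # coefficients of 1/(s-1)^3, 1/(s-1)^2, 1/(s-1)
    print(heaviside_coeffs(F, -2, 1))    # coefficient of 1/(s+2)
    print(sp.apart(F, s))                # reference partial fraction expansion
    ```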

  17. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.

  18. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

    We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained through minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  19. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
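
    For intuition, a minimal one-dimensional sketch of the idea is given below, assuming a uniform prior on [-1, 1] and a Legendre basis (a toy Gaussian likelihood, illustrative names): once the likelihood is projected onto Legendre polynomials, the evidence and posterior mean reduce to simple combinations of the expansion coefficients, here c_0 and c_1/(3 c_0).

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    def likelihood(theta, data, sigma=0.3):
        """Gaussian likelihood of i.i.d. data with unknown mean theta (toy example)."""
        theta = np.atleast_1d(theta)[:, None]
        return (np.exp(-0.5 * ((data - theta) / sigma) ** 2).prod(axis=1)
                / (sigma * np.sqrt(2 * np.pi)) ** len(data))

    data = np.array([0.15, 0.32, 0.21])
    p = 30                                           # expansion order

    # Project the likelihood onto Legendre polynomials with Gauss-Legendre quadrature:
    # c_k = (2k+1)/2 * integral_{-1}^{1} L(theta) P_k(theta) dtheta.
    nodes, weights = L.leggauss(64)
    Lvals = likelihood(nodes, data)
    c = np.array([(2 * k + 1) / 2.0 * np.sum(weights * Lvals * L.legval(nodes, np.eye(p + 1)[k]))
                  for k in range(p + 1)])

    # With a uniform prior 1/2 on [-1, 1]: evidence = c_0, posterior mean = c_1 / (3 c_0).
    print(c[0], c[1] / (3.0 * c[0]))

    # Brute-force quadrature check of the posterior mean.
    print(np.sum(weights * nodes * Lvals) / np.sum(weights * Lvals))
    ```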

  20. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by using the step-by-step method. Numerical applications of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.

  1. The time-fractional radiative transport equation—Continuous-time random walk, diffusion approximation, and Legendre-polynomial expansion

    NASA Astrophysics Data System (ADS)

    Machida, Manabu

    2017-01-01

    We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from continuous-time random walk and see how the equation is related to the time-fractional diffusion equation in the asymptotic limit. Then we solve the equation with Legendre-polynomial expansion.

  2. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions.

  3. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
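
    A minimal sketch of the banded integration operator in the Chebyshev case (one member of the class discussed above), under the standard convention: if f = Σ a_k T_k, an antiderivative has coefficients b_k = (c_{k-1} a_{k-1} - a_{k+1})/(2k) for k >= 1, with c_0 = 2 and c_j = 1 otherwise. NumPy's chebint serves as a reference; names are illustrative.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def cheb_integration_matrix(n):
        """Banded matrix mapping Chebyshev coefficients of f to those of an antiderivative.

        Row k encodes b_k = (c_{k-1} a_{k-1} - a_{k+1}) / (2k), c_0 = 2, c_j = 1 (k >= 1);
        the k = 0 row is left zero (free integration constant).
        """
        B = np.zeros((n + 2, n + 1))
        for k in range(1, n + 2):
            c = 2.0 if k - 1 == 0 else 1.0
            B[k, k - 1] = c / (2.0 * k)
            if k + 1 <= n:
                B[k, k + 1] = -1.0 / (2.0 * k)
        return B

    a = np.array([1.0, 2.0, -0.5, 0.75])     # coefficients of f in the Chebyshev basis
    b = cheb_integration_matrix(len(a) - 1) @ a
    print(b)
    print(C.chebint(a))                      # NumPy reference (same up to the constant term b_0)
    ```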

  4. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.

  5. Polynomial chaos representation of databases on manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2017-04-15

    Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.

  6. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
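
    A minimal sketch of the 2D (circular) tabulation described above, under the standard convention (an assumption, since the record itself also covers the 3D spherical case): R_n^m(r) = Σ_k (-1)^k (n-k)! / [k! ((n+m)/2 - k)! ((n-m)/2 - k)!] r^(n-2k).

    ```python
    import numpy as np
    from math import factorial

    def zernike_radial_coeffs(n, m):
        """Coefficients of r^0 .. r^n for the 2D Zernike radial polynomial R_n^m (standard convention)."""
        m = abs(m)
        if (n - m) % 2:                      # R_n^m vanishes when n - m is odd
            return np.zeros(n + 1)
        coeffs = np.zeros(n + 1)
        for k in range((n - m) // 2 + 1):
            coeffs[n - 2 * k] = ((-1) ** k * factorial(n - k)
                                 / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
        return coeffs

    # R_4^2(r) = 4 r^4 - 3 r^2 and R_4^0(r) = 6 r^4 - 6 r^2 + 1.
    print(zernike_radial_coeffs(4, 2))       # [ 0.  0. -3.  0.  4.]
    print(zernike_radial_coeffs(4, 0))       # [ 1.  0. -6.  0.  6.]
    ```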

  7. Heat transfer of phase-change materials in two-dimensional cylindrical coordinates

    NASA Technical Reports Server (NTRS)

    Labdon, M. B.; Guceri, S. I.

    1981-01-01

    A two-dimensional phase-change problem is numerically solved in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are numerically solved in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.

  8. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. This paper is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like the B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of basis function as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.

  9. Classical Dynamics of Fullerenes

    NASA Astrophysics Data System (ADS)

    Sławianowski, Jan J.; Kotowski, Romuald K.

    2017-06-01

    The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enable us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  11. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
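    A hedged sketch of the weighting idea follows: the ℓ1 penalty is scaled per coefficient by weights reflecting an assumed a priori decay of the PC coefficients, and the resulting weighted problem is solved with plain iterative soft-thresholding (ISTA). This is only a toy recovery example with made-up data and weights, not the algorithm or test cases of the paper.

      import numpy as np

      def weighted_l1_ista(A, b, lam, w, n_iter=500):
          # Minimize 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i| by iterative soft-thresholding.
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
          for _ in range(n_iter):
              g = x - step * A.T @ (A @ x - b)              # gradient step on the quadratic term
              x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)   # weighted soft threshold
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 200)) / np.sqrt(60)      # measurement (basis) matrix
      x_true = np.zeros(200)
      x_true[[2, 7, 19]] = [1.5, -2.0, 0.8]                 # few active low-order coefficients
      b = A @ x_true
      w = np.sqrt(1.0 + np.arange(200))                     # heavier penalty where coefficients are assumed small
      x_hat = weighted_l1_ista(A, b, lam=0.01, w=w)
      print(np.flatnonzero(np.abs(x_hat) > 0.1))            # expected support: [2, 7, 19]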

  12. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model, which approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to the change of the orientation of the B-field vector with respect to the human body. In detail, the analysis of the pregnant woman exposure at 7 months of gestational age is carried out to build a statistical meta-model of the induced electric field for each fetal tissue and in the fetal whole body by means of the PC expansion as a function of the B-field orientation, considering a uniform exposure at 50 Hz.

  13. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach for building and solving recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultraspherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
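    One relation of this type, the first derivative of a Laguerre polynomial expressed as a combination of lower-degree Laguerre polynomials, d/dx L_n(x) = -(L_0 + ... + L_{n-1})(x), can be checked numerically with numpy's Laguerre module; the snippet below is such a check, not code from the paper.

      import numpy as np
      from numpy.polynomial import laguerre as L

      n = 6
      c = np.zeros(n + 1)
      c[n] = 1.0                                   # Laguerre-series coefficients of L_6(x)
      print(L.lagder(c))                           # expect [-1, -1, -1, -1, -1, -1]

      # Pointwise check of d/dx L_n(x) = -(L_0(x) + ... + L_{n-1}(x)) on a small grid.
      x = np.linspace(0.0, 5.0, 7)
      lhs = L.lagval(x, L.lagder(c))
      rhs = -sum(L.lagval(x, np.eye(n)[k]) for k in range(n))
      print(np.allclose(lhs, rhs))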

  14. Simultaneous stochastic inversion for geomagnetic main field and secular variation. I - A large-scale inverse problem

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1987-01-01

    The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.

  15. Light field creating and imaging with different order intensity derivatives

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Jiang, Huan

    2014-10-01

    Microscopic image restoration and reconstruction is a challenging topic in image processing and computer vision, with wide applications in life science, biology and medicine. A microscopic light field creation and three-dimensional (3D) reconstruction method is proposed for transparent or partially transparent microscopic samples, based on the Taylor expansion theorem and polynomial fitting. First, the image stack of the specimen is divided into several groups, in an overlapping or non-overlapping way along the optical axis, and the first image of every group is regarded as the reference image. Then intensity derivatives of different orders are calculated using all the images of every group and a polynomial fitting method, under the assumption that the structure of the specimen captured by the image stack varies smoothly and approximately linearly over a small range along the optical axis. Subsequently, new images located at any position a distance Δz along the optical axis from the reference image can be generated by means of the Taylor expansion theorem and the calculated intensity derivatives. Finally, the microscopic specimen can be reconstructed in 3D using deconvolution technology and all the images, including both the observed and the generated ones. The experimental results show the effectiveness and feasibility of our method.

  16. New template family for the detection of gravitational waves from comparable-mass black hole binaries

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.

    2007-11-01

    In order to improve the phasing of the comparable-mass waveform as we approach the last stable orbit for a system, various resummation methods have been used to improve the standard post-Newtonian waveforms. In this work we present a new family of templates for the detection of gravitational waves from the inspiral of two comparable-mass black hole binaries. These new adiabatic templates are based on reexpressing the derivative of the binding energy and the gravitational wave flux functions in terms of shifted Chebyshev polynomials. The Chebyshev polynomials are a useful tool in numerical methods as they display the fastest convergence of any of the orthogonal polynomials. In this case they are also particularly useful as they eliminate one of the features that plagues the post-Newtonian expansion. The Chebyshev binding energy now has information at all post-Newtonian orders, compared to the post-Newtonian templates which only have information at full integer orders. In this work, we compare both the post-Newtonian and Chebyshev templates against a fiducially exact waveform. This waveform is constructed from a hybrid method of using the test-mass results combined with the mass dependent parts of the post-Newtonian expansions for the binding energy and flux functions. Our results show that the Chebyshev templates achieve extremely high fitting factors at all post-Newtonian orders and provide excellent parameter extraction. We also show that this new template family has a faster Cauchy convergence, gives a better prediction of the position of the last stable orbit and in general recovers higher Signal-to-Noise ratios than the post-Newtonian templates.
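    The re-expansion step itself, rewriting a truncated power series on a finite velocity interval in terms of shifted Chebyshev polynomials, can be sketched in a few lines of Python; the post-Newtonian-style coefficients and the interval below are invented for illustration and are not those of the paper.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      pn_coeffs = [1.0, 0.0, -3.7, 12.9, -4.9, 3.2]          # hypothetical ascending power-series coefficients
      v_max = 0.4
      v = np.linspace(0.0, v_max, 400)
      f = np.polyval(pn_coeffs[::-1], v)                      # truncated power series evaluated on [0, v_max]

      u = 2.0 * v / v_max - 1.0                               # map to [-1, 1] for shifted Chebyshev polynomials
      cheb_coeffs = C.chebfit(u, f, deg=5)
      print(np.max(np.abs(C.chebval(u, cheb_coeffs) - f)))    # same degree, so exact up to round-off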

  17. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. The researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions. The authors also discuss the weaknesses of the scheme and suggest areas for further investigation.

  18. Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion

    NASA Astrophysics Data System (ADS)

    Hutterer, Victoria; Ramlau, Ronny

    2018-03-01

    The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular value type expansion with weighted Chebyshev polynomials.

  19. A Near to Far Transformation using Spherical Expansions Phase 1: Verification on Simulated Antennas

    DTIC Science & Technology

    2014-09-01

    The associated Legendre functions of the first kind are defined as P_n^m(x) := (−1)^m (1 − x^2)^(m/2) d^m/dx^m P_n(x), where the P_n(x) are the Legendre polynomials, and P_n^m(x) = 0 for m > n [3, Equation 12.84 and footnote]. Table 2 of the report lists the initial Legendre polynomials and their derivatives, and Figure 8 plots the first few.

  20. New separated polynomial solutions to the Zernike system on the unit disk and interbasis expansion.

    PubMed

    Pogosyan, George S; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation proposed by Frits Zernike to obtain a basis of polynomial orthogonal solutions on the unit disk to classify wavefront aberrations in circular pupils is shown to have a set of new orthonormal solution bases involving Legendre and Gegenbauer polynomials in nonorthogonal coordinates, close to Cartesian ones. We find the overlaps between the original Zernike basis and a representative of the new set, which turn out to be Clebsch-Gordan coefficients.

  1. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaucage, Timothy R; Beenfeldt, Eric P; Speakman, Scott A

    Among the langasite family of crystals (LGX), the three most popular materials are langasite (LGS, La3Ga5SiO14), langatate (LGT, La3Ga5.5Ta0.5O14) and langanite (LGN, La3Ga5.5Nb0.5O14). The LGX crystals have received significant attention for acoustic wave (AW) device applications due to several properties, which include: (1) piezoelectric constants about two and a half times those of quartz, thus allowing the design of larger bandwidth filters; (2) existence of temperature compensated orientations; (3) high density, with potential for reduced vibration and acceleration sensitivity; and (4) possibility of operation at high temperatures, since the LGX crystals do not present phase changes up to their melting point above 1400°C. The LGX crystals' capability to operate at elevated temperatures calls for an investigation on the growth quality and the consistency of these materials' properties at high temperature. One of the fundamental crystal properties is the thermal expansion coefficients in the entire temperature range where the material is operational. This work focuses on the measurement of the LGT thermal expansion coefficients from room temperature (25°C) to 1200°C. Two methods of extracting the thermal expansion coefficients have been used and compared: (a) dual push-rod dilatometry, which provides the bulk expansion; and (b) x-ray powder diffraction, which provides the lattice expansion. Both methods were performed over the entire temperature range and considered multiple samples taken from <001> Czochralski grown LGT material. The thermal coefficients of expansion were extracted by approximating each expansion data set to a third order polynomial fit over three temperature ranges reported in this work: 25°C to 400°C, 400°C to 900°C, and 900°C to 1200°C. An accuracy of fit better than 35 ppm for the bulk expansion and better than 10 ppm for the lattice expansion has been obtained with the aforementioned polynomial fitting. The percentage difference between the bulk and the lattice fitted expansion responses over the entire temperature range of 25°C to 1200°C is less than 2% for the three crystalline axes, which indicates the high quality and growth consistency of the LGT crystal measured.
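    The extraction step described above, a third-order polynomial fit of the measured expansion over a temperature range, differentiated to give the instantaneous expansion coefficient, can be sketched as below; the dilatometry numbers are synthetic placeholders, not the LGT data of the report.

      import numpy as np

      # Synthetic dilatometry data: temperature (degrees C) and relative elongation dL/L0 along one axis.
      T = np.array([25, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200], float)
      dL_L0 = 1e-6 * np.array([0, 470, 1110, 1770, 2450, 3150, 3870, 4610, 5370, 6150, 6950, 7770, 8610])

      # Third-order polynomial fit of the expansion over one of the reported temperature ranges.
      mask = (T >= 25) & (T <= 400)
      p = np.polyfit(T[mask], dL_L0[mask], deg=3)

      # Instantaneous thermal expansion coefficient alpha(T) = d(dL/L0)/dT from the fitted polynomial.
      alpha = np.polyval(np.polyder(p), np.array([100.0, 300.0]))
      print(alpha)                                  # of order 6e-6 per degree C for this synthetic set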

  3. Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory

    NASA Astrophysics Data System (ADS)

    Usselman, Austin

    We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the n + 1 and n - 1-particle states. The Fock-state expansion leads to an infinite set of coupled equations where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased. Convergence of the mass eigenvalue solutions is then obtained. Three mass ratios between the charged particle and the neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probability between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on how large the mass of the charged particle was: the higher the mass of the charged particle, the more the system depended on higher-order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator. This exponential operator can then be truncated to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows only one particle to be created by the new operator and proved not powerful enough to match the Fock-state expansion. The second-order approximation allowed one and two particles to be created by the new operator and converged to the Fock-state expansion results. This showed the LFCC method to be a reliable replacement method for solving quantum field theory problems.

  4. Processing short-term and long-term information with a combination of polynomial approximation techniques and time-delay neural networks.

    PubMed

    Fuchs, Erich; Gruber, Christian; Reitmaier, Tobias; Sick, Bernhard

    2009-09-01

    Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capture temporal information with various reference periods simultaneously. A least squares approximation of the time series with orthogonal polynomials will be used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior will be modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method will be demonstrated by means of artificial data and two real-world application examples, the prediction of the user number in a computer network and online tool wear classification in turning.
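    The short-term feature extraction can be sketched as a sliding-window least-squares fit with orthogonal (here Legendre) polynomials, whose low-order coefficients play the role of local average, increase and curvature; the window length, degree and signal below are arbitrary illustrations, and the TDNN itself is not shown.

      import numpy as np
      from numpy.polynomial import legendre as Leg

      def short_term_features(signal, window, deg=2):
          # Orthogonal-polynomial coefficients of a least-squares fit over each sliding window;
          # coefficient 0 ~ local average, 1 ~ local increase, 2 ~ local curvature.
          u = np.linspace(-1.0, 1.0, window)
          feats = [Leg.legfit(u, signal[s:s + window], deg)
                   for s in range(len(signal) - window + 1)]
          return np.array(feats)

      t = np.arange(300)
      x = 0.01 * t + np.sin(2 * np.pi * t / 25) + 0.1 * np.random.default_rng(2).standard_normal(300)
      print(short_term_features(x, window=20).shape)          # (281, 3): one feature vector per window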

  5. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutauruk, P. T. P.; Ireland, D. G.; Rosner, G.

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K+ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1 where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
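    A hedged sketch of the model-comparison idea: fit the angular distribution with an increasing number of Legendre polynomials and score each fit, here with the Bayesian information criterion as a crude stand-in for the posterior model probabilities used in the analysis; the data points are synthetic, not CLAS data.

      import numpy as np
      from numpy.polynomial import legendre as Leg

      rng = np.random.default_rng(3)
      cos_th = np.linspace(-0.95, 0.95, 40)
      true = 0.30 + 0.12 * cos_th + 0.05 * (1.5 * cos_th**2 - 0.5)     # three Legendre terms
      data = true + 0.01 * rng.standard_normal(cos_th.size)            # synthetic "cross section" points

      for n_terms in range(1, 7):
          coef = Leg.legfit(cos_th, data, n_terms - 1)
          resid = data - Leg.legval(cos_th, coef)
          bic = cos_th.size * np.log(np.mean(resid**2)) + n_terms * np.log(cos_th.size)
          print(n_terms, round(bic, 1))              # the minimum BIC flags the preferred number of terms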

  6. Limit Cycle Bifurcations by Perturbing a Piecewise Hamiltonian System with a Double Homoclinic Loop

    NASA Astrophysics Data System (ADS)

    Xiong, Yanqin

    2016-06-01

    This paper is concerned with the bifurcation problem of limit cycles by perturbing a piecewise Hamiltonian system with a double homoclinic loop. First, the derivative of the first Melnikov function is provided. Then, we use it, together with the analytic method, to derive the asymptotic expansion of the first Melnikov function near the loop. Meanwhile, we present the first coefficients in the expansion, which can be applied to study the limit cycle bifurcation near the loop. We give sufficient conditions for this system to have 14 limit cycles in the neighborhood of the loop. As an application, a piecewise polynomial Liénard system is investigated, finding six limit cycles with the help of the obtained method.

  7. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    DOE PAGES

    Kent, Stephen M.

    2018-02-15

    If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.

  8. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as those for ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.

  9. Beyond Euler angles: exploiting the angle-axis parametrization in a multipole expansion of the rotation operator.

    PubMed

    Siemens, Mark; Hancock, Jason; Siminovitch, David

    2007-02-01

    Euler angles (α, β, γ) are cumbersome from a computational point of view, and their link to experimental parameters is oblique. The angle-axis {Φ, n} parametrization, especially in the form of quaternions (or Euler-Rodrigues parameters), has served as the most promising alternative, and they have enjoyed considerable success in rf pulse design and optimization. We focus on the benefits of angle-axis parameters by considering a multipole operator expansion of the rotation operator D(Φ, n), and a Clebsch-Gordan expansion of the rotation matrices D^J_(MM')(Φ, n). Each of the coefficients in the Clebsch-Gordan expansion is proportional to the product of a spherical harmonic of the vector n specifying the axis of rotation, Y_(λμ)(n), with a fixed function of the rotation angle Φ, a Gegenbauer polynomial C^(λ+1)_(2J−λ)(cos(Φ/2)). Several application examples demonstrate that this Clebsch-Gordan expansion gives easy and direct access to many of the parameters of experimental interest, including coherence order changes (isolated in the Clebsch-Gordan coefficients), and rotation angle (isolated in the Gegenbauer polynomials).

  10. Numerical study of the stress-strain state of reinforced plate on an elastic foundation by the Bubnov-Galerkin method

    NASA Astrophysics Data System (ADS)

    Beskopylny, Alexey; Kadomtseva, Elena; Strelnikov, Grigory

    2017-10-01

    The stress-strain state of a rectangular slab resting on an elastic foundation is considered. The slab material is isotropic. The slab has stiffening ribs that are directed parallel to both sides of the plate. The governing equations for determining the deflection are obtained for various mechanical and geometric characteristics of the stiffening ribs, which are parallel to different sides of the plate and have different rigidities in bending and torsion. The calculation scheme assumes an orthotropic slab having different cylindrical stiffnesses in two mutually perpendicular directions parallel to the reinforcing ribs. The elastic foundation is adopted in the form of the Winkler model. To determine the deflection, the Bubnov-Galerkin method is used. The deflection is taken in the form of an expansion in a series with unknown coefficients in terms of special polynomials, which are a combination of Legendre polynomials.

  11. Determination of the expansion of the potential of the earth's normal gravitational field

    NASA Astrophysics Data System (ADS)

    Kochiev, A. A.

    The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.

  12. Unconditionally stable WLP-FDTD method for the modeling of electromagnetic wave propagation in gyrotropic materials.

    PubMed

    Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan

    2015-12-14

    The unconditional stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.

  13. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.

  14. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
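    For one standard normal input, the NISP projection step mentioned above reduces to a Gauss-Hermite quadrature of the response against each Hermite basis polynomial; the sketch below shows that one-dimensional building block only (no sparse grids, no adaptivity), with an invented test response.

      import numpy as np
      from numpy.polynomial import hermite_e as He
      from math import factorial, sqrt, pi

      def pce_coeffs_nisp(model, order, n_quad=12):
          # c_k = E[model(xi) He_k(xi)] / k! for a standard normal xi, by Gauss-Hermite quadrature.
          x, w = He.hermegauss(n_quad)              # nodes and weights for the weight exp(-x^2/2)
          w = w / sqrt(2.0 * pi)                    # normalize so the weights integrate the Gaussian pdf
          fx = model(x)
          return np.array([np.sum(w * fx * He.hermeval(x, np.eye(order + 1)[k])) / factorial(k)
                           for k in range(order + 1)])

      # Check against a response with a known expansion: xi^2 = He_0(xi) + He_2(xi).
      print(pce_coeffs_nisp(lambda xi: xi**2, order=4))       # approximately [1, 0, 1, 0, 0]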

  15. Sum-of-squares of polynomials approach to nonlinear stability of fluid flows: an example of application

    PubMed Central

    Tutty, O.

    2015-01-01

    With the goal of providing the first example of application of a recently proposed method, thus demonstrating its ability to give results in principle, global stability of a version of the rotating Couette flow is examined. The flow depends on the Reynolds number and a parameter characterizing the magnitude of the Coriolis force. By converting the original Navier–Stokes equations to a finite-dimensional uncertain dynamical system using a partial Galerkin expansion, high-degree polynomial Lyapunov functionals were found by sum-of-squares of polynomials optimization. It is demonstrated that the proposed method allows obtaining the exact global stability limit for this flow in a range of values of the parameter characterizing the Coriolis force. Outside this range a lower bound for the global stability limit was obtained, which is still better than the energy stability limit. In the course of the study, several results meaningful in the context of the method used were also obtained. Overall, the results obtained demonstrate the applicability of the recently proposed approach to global stability of the fluid flows. To the best of our knowledge, it is the first case in which global stability of a fluid flow has been proved by a generic method for the value of a Reynolds number greater than that which could be achieved with the energy stability approach. PMID:26730219

  16. Comparison of permutationally invariant polynomials, neural networks, and Gaussian approximation potentials in representing water interactions through many-body expansions

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuong T.; Székely, Eszter; Imbalzano, Giulio; Behler, Jörg; Csányi, Gábor; Ceriotti, Michele; Götz, Andreas W.; Paesani, Francesco

    2018-06-01

    The accurate representation of multidimensional potential energy surfaces is a necessary requirement for realistic computer simulations of molecular systems. The continued increase in computer power accompanied by advances in correlated electronic structure methods nowadays enables routine calculations of accurate interaction energies for small systems, which can then be used as references for the development of analytical potential energy functions (PEFs) rigorously derived from many-body (MB) expansions. Building on the accuracy of the MB-pol many-body PEF, we investigate here the performance of permutationally invariant polynomials (PIPs), neural networks, and Gaussian approximation potentials (GAPs) in representing water two-body and three-body interaction energies, denoting the resulting potentials PIP-MB-pol, Behler-Parrinello neural network-MB-pol, and GAP-MB-pol, respectively. Our analysis shows that all three analytical representations exhibit similar levels of accuracy in reproducing both two-body and three-body reference data as well as interaction energies of small water clusters obtained from calculations carried out at the coupled cluster level of theory, the current gold standard for chemical accuracy. These results demonstrate the synergy between interatomic potentials formulated in terms of a many-body expansion, such as MB-pol, that are physically sound and transferable, and machine-learning techniques that provide a flexible framework to approximate the short-range interaction energy terms.

  17. Using Taylor Expansions to Prepare Students for Calculus

    ERIC Educational Resources Information Center

    Lutzer, Carl V.

    2011-01-01

    We propose an alternative to the standard introduction to the derivative. Instead of using limits of difference quotients, students develop Taylor expansions of polynomials. This alternative allows students to develop many of the central ideas about the derivative at an intuitive level, using only skills and concepts from precalculus, and…

  18. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    NASA Astrophysics Data System (ADS)

    Kent, Stephen M.

    2018-04-01

    If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.

  19. Hermite Polynomials and the Inverse Problem for Collisionless Equilibria

    NASA Astrophysics Data System (ADS)

    Allanson, O.; Neukirch, T.; Troscheit, S.; Wilson, F.

    2017-12-01

    It is long established that Hermite polynomial expansions in either velocity or momentum space can elegantly encode the non-Maxwellian velocity-space structure of a collisionless plasma distribution function (DF). In particular, Hermite polynomials in the canonical momenta naturally arise in the consideration of the 'inverse problem in collisionless equilibria' (IPCE): "for a given macroscopic/fluid equilibrium, what are the self-consistent Vlasov-Maxwell equilibrium DFs?". This question is of particular interest for the equilibrium and stability properties of a given macroscopic configuration, e.g. a current sheet. It can be relatively straightforward to construct a formal solution to IPCE by a Hermite expansion method, but several important questions remain regarding the use of this method. We present recent work that considers the necessary conditions of non-negativity, convergence, and the existence of all moments of an equilibrium DF solution found for IPCE. We also establish meaningful analogies between the equations that link the microscopic and macroscopic descriptions of the Vlasov-Maxwell equilibrium, and those that solve the initial value problem for the heat equation. In the language of the heat equation, IPCE poses the pressure tensor as the 'present' heat distribution over an infinite domain, and the non-Maxwellian features of the DF as the 'past' distribution. We find sufficient conditions for the convergence of the Hermite series representation of the DF, and prove that the non-negativity of the DF can be dependent on the magnetisation of the plasma. For DFs that decay at least as quickly as exp(-v^2/4), we show non-negativity is guaranteed for at least a finite range of magnetisation values, as parameterised by the ratio of the Larmor radius to the gradient length scale. 1. O. Allanson, T. Neukirch, S. Troscheit & F. Wilson: From one-dimensional fields to Vlasov equilibria: theory and application of Hermite polynomials, Journal of Plasma Physics, 82, 905820306, 2016 2. O. Allanson, S. Troscheit & T. Neukirch: The inverse problem for collisionless plasma equilibria (invited paper for IMA Journal of Applied Mathematics, under review)

  20. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
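    The mixed-kernel idea can be sketched with scikit-learn's SVR and a custom kernel that blends a polynomial kernel (global trend) with a Gaussian RBF kernel (local detail); note that the standard polynomial kernel below stands in for the orthogonal-polynomials kernel of the paper, the weights and data are invented, and the Sobol-index post-processing is not shown.

      import numpy as np
      from sklearn.svm import SVR

      def mixed_kernel(X, Y, weight=0.6, degree=3, gamma=2.0):
          # Convex combination of a polynomial kernel and a Gaussian RBF kernel.
          poly = (X @ Y.T + 1.0) ** degree
          sq_d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
          rbf = np.exp(-gamma * sq_d)
          return weight * poly + (1.0 - weight) * rbf

      rng = np.random.default_rng(4)
      X = rng.uniform(-1.0, 1.0, size=(200, 2))
      y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1])              # toy response for the meta-model

      svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
      print(round(svr.score(X, y), 3))                      # training R^2 of the fitted meta-model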

  1. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive collocation methods such as Monte-Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.

  2. The acoustic power of a vibrating clamped circular plate revisited in the wide low frequency range using expansion into the radial polynomials.

    PubMed

    Rdzanek, Wojciech P

    2016-06-01

    This study deals with the classical problem of sound radiation of an excited clamped circular plate embedded into a flat rigid baffle. The system of two coupled differential equations is solved, one for the excited and damped vibrations of the plate and the other being the Helmholtz equation. An approach using the expansion into radial polynomials leads to results for the modal impedance coefficients useful for a comprehensive numerical analysis of sound radiation. The results obtained are accurate and efficient in a wide low frequency range and can easily be adopted for a simply supported circular plate. The fluid loading is included, providing accurate results at resonance.

  3. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  4. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset completely fits with the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, if there are horizontal translation and scaling, the terms in the original polynomials will become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. With a small translation, a first-order Taylor expansion could be used to simplify the computation. Several representative terms could be selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the outcomes of the analytical solutions and the approximated values under discrete sampling are consistent. With the computation of a group of randomly generated coefficients, we contrasted the changes under different translation and scaling conditions. Larger ratios correlate with larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations from the uses of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation less than 7% and an RMS deviation less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper for the multiple typical function bases during translation and scaling in rectangular areas could be applied in wavefront approximation and analysis.

  5. Final Shape of Precision Molded Optics: Part 1 - Computational Approach, Material Definitions and the Effect of Lens Shape

    DTIC Science & Technology

    2012-05-15

    subroutine by adding time-dependence to the thermal expansion coefficient. The user subroutine was written in Intel Visual Fortran that is compatible...temperature history dependent expansion and contraction, and the molds were modeled as elastic taking into account both mechanical and thermal strain. In...behavior was approximated by assuming the thermal coefficient of expansion to be a fourth order polynomial function of temperature. The authors

  6. Breathers, quasi-periodic and travelling waves for a generalized ?-dimensional Yu-Toda-Sasa-Fukayama equation in fluids

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Qiang; Gao, Yi-Tian; Zhao, Chen; Jia, Shu-Liang; Lan, Zhong-Zhou

    2017-07-01

    Under investigation in this paper is a generalized ?-dimensional Yu-Toda-Sasa-Fukayama equation for the interfacial wave in a two-layer fluid or the elastic quasi-plane wave in a liquid lattice. By virtue of the binary Bell polynomials, bilinear form of this equation is obtained. With the help of the bilinear form, N-soliton solutions are obtained via the Hirota method, and a bilinear Bäcklund transformation is derived to verify the integrability. Homoclinic breather waves are obtained according to the homoclinic test approach, which is not only the space-periodic breather but also the time-periodic breather via the graphic analysis. Via the Riemann theta function, quasi one-periodic waves are constructed, which can be viewed as a superposition of the overlapping solitary waves, placed one period apart. Finally, soliton-like, periodical triangle-type, rational-type and solitary bell-type travelling waves are obtained by means of the polynomial expansion method.

  7. Diffraction Theory for Polygonal Apertures

    DTIC Science & Technology

    1988-07-01

    Earlier treatments utilized oblate spheroidal vector wave functions, and Nomura and Katsura (1955) employed an expansion of the hypergeometric polynomial. A tabulated list of Chebyshev polynomials (e.g., 2k^2 − 1 and 4k^3 − 3k) relates directly to the orthogonality relations for the Chebyshev polynomials used in the expansion. Section 3.1.2.2, Gaussian Illuminated Corner, presents a sample calculation illustrating some of the basic characteristics of the GBE.

  8. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.

  9. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field ln K_S is first expanded into a series in terms of orthogonal Gaussian standard random variables, with its coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of ln K_S. Next, the head h is decomposed as a perturbation expansion series Σ h^(m), where h^(m) represents the mth-order head term with respect to the standard deviation of ln K_S. Then h^(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients h^(m)_(i1,i2,...,im) are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on h^(m)_(i1,i2,...,im). A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.
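    The first step of the approach, the Karhunen-Loeve representation of the log-conductivity field, can be sketched in discrete form by eigendecomposing a covariance matrix on a grid and sampling realizations from a few dominant modes; the exponential covariance, grid and parameter values below are illustrative, and the perturbation/polynomial expansion of the head is not shown.

      import numpy as np

      # 1-D grid and an exponential covariance model for ln K_S (illustrative parameter values).
      x = np.linspace(0.0, 10.0, 200)
      sigma2, corr_len, mean_lnK = 1.0, 2.0, -3.0
      Cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

      # Discrete Karhunen-Loeve decomposition: eigenpairs of the covariance matrix, largest first.
      lam, phi = np.linalg.eigh(Cov)
      order = np.argsort(lam)[::-1]
      lam, phi = lam[order], phi[:, order]
      n_kl = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1   # keep ~95% of the variance

      # One realization: ln K = mean + sum_i sqrt(lambda_i) * phi_i * xi_i with standard normal xi_i.
      xi = np.random.default_rng(5).standard_normal(n_kl)
      lnK = mean_lnK + phi[:, :n_kl] @ (np.sqrt(lam[:n_kl]) * xi)
      print(n_kl, lnK.shape)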

  10. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L_l(x)L_m(y), where l and m are positive integers (including zero) and L_l(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L_l(x)L_m(y), there is a corresponding orthonormal polynomial L_l(y)L_m(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.

  11. Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.

    PubMed

    Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane

    Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
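
    As a rough illustration of the idea (a sketch with hypothetical dimensions and regularization, and simple Neumann-type weights rather than the paper's SINR-optimized coefficients), the matrix inverse in RZF precoding can be replaced by a truncated matrix polynomial:

        import numpy as np

        def tpe_precoder(H, a=5.0, degree=4):
            # Approximate W = (H^H H + a I)^{-1} H^H by nu * sum_l (I - nu*A)^l applied to H^H
            K, M = H.shape                          # users x antennas
            A = H.conj().T @ H + a * np.eye(M)
            nu = 1.0 / np.linalg.norm(A, 2)         # keeps the spectral radius of (I - nu*A) below 1
            W, T = np.zeros((M, M), dtype=complex), np.eye(M)
            for _ in range(degree + 1):
                W += nu * T
                T = T @ (np.eye(M) - nu * A)
            return W @ H.conj().T

        rng = np.random.default_rng(1)
        H = (rng.standard_normal((8, 32)) + 1j * rng.standard_normal((8, 32))) / np.sqrt(2)
        W_rzf = np.linalg.solve(H.conj().T @ H + 5.0 * np.eye(32), H.conj().T)
        for J in (4, 16, 64):
            err = np.linalg.norm(tpe_precoder(H, degree=J) - W_rzf) / np.linalg.norm(W_rzf)
            print(f"degree {J}: relative error {err:.3f}")

    With optimized coefficients a much lower degree suffices; the point of the sketch is only that each step uses matrix products rather than an explicit inversion in every coherence period.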

  12. On the modular structure of the genus-one Type II superstring low energy expansion

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Green, Michael B.; Vanhove, Pierre

    2015-08-01

    The analytic contribution to the low energy expansion of Type II string amplitudes at genus-one is a power series in space-time derivatives with coefficients that are determined by integrals of modular functions over the complex structure modulus of the world-sheet torus. These modular functions are associated with world-sheet vacuum Feynman diagrams and given by multiple sums over the discrete momenta on the torus. In this paper we exhibit exact differential and algebraic relations for a certain infinite class of such modular functions by showing that they satisfy Laplace eigenvalue equations with inhomogeneous terms that are polynomial in non-holomorphic Eisenstein series. Furthermore, we argue that the set of modular functions that contribute to the coefficients of interactions up to order are linear sums of functions in this class and quadratic polynomials in Eisenstein series and odd Riemann zeta values. Integration over the complex structure results in coefficients of the low energy expansion that are rational numbers multiplying monomials in odd Riemann zeta values.

  13. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    PubMed

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    Modified homotopy perturbation method (HPM) was used to solve the hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1] with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of inverse of hypersingular integral operator leads to the convergence of HPM in certain cases. Modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between modified HPM, standard HPM, Bernstein polynomials approach Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), Chebyshev expansion method Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011) and reproducing kernel Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM dominates the standard HPM and others. Finally, it is found that the modified HPM is exact, if the solution of the problem is a product of weights and polynomial functions. For rational solution the absolute error decreases very fast by increasing the number of collocation points.

  14. Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.

    PubMed

    Mahajan, Virendra N

    2010-12-20

    The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
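
    A quick numerical check of this orthogonality (a sketch, not taken from the paper; the pupil is normalized to [-1, 1] x [-1, 1] and the degrees are chosen arbitrarily) can be written with products of orthonormal Legendre polynomials:

        import numpy as np
        from numpy.polynomial import legendre as L

        def orthonormal_legendre(l, x):
            # sqrt(2l+1) * P_l(x): unit mean-square value over [-1, 1]
            c = np.zeros(l + 1); c[l] = 1.0
            return np.sqrt(2 * l + 1) * L.legval(x, c)

        xg, wg = L.leggauss(20)                      # Gauss-Legendre quadrature nodes/weights

        def pupil_mean(l1, m1, l2, m2):
            # mean over the rectangle of [L_l1(x) L_m1(y)] * [L_l2(x) L_m2(y)]
            fx = orthonormal_legendre(l1, xg) * orthonormal_legendre(l2, xg)
            fy = orthonormal_legendre(m1, xg) * orthonormal_legendre(m2, xg)
            return 0.25 * (wg @ fx) * (wg @ fy)

        print(pupil_mean(2, 1, 2, 1))   # ~1: the product polynomial has unit norm
        print(pupil_mean(2, 1, 1, 2))   # ~0: distinct products are orthogonal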

  15. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state is presented. These follow a polynomial form, making them computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second a fit of specific volume, which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects of the implementation of TEOS-10 in ocean models are discussed.
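
    The structure of such fits (and only the structure; the coefficients below are placeholders, not the published TEOS-10 polynomial) can be sketched as a reference profile in pressure plus a polynomial anomaly in salinity, temperature and pressure:

        import numpy as np

        # hypothetical 6th-order reference-profile coefficients (placeholders)
        R_REF = np.array([1.0e-3, -2.0e-9, 3.0e-15, 0.0, 0.0, 0.0, 0.0])

        def specific_volume(S, T, p, anomaly_terms):
            # anomaly_terms maps exponent triples (i, j, k) for (S, T, p) to coefficients
            v_ref = np.polynomial.polynomial.polyval(p, R_REF)   # vertical reference profile
            v_anom = sum(c * S**i * T**j * p**k for (i, j, k), c in anomaly_terms.items())
            return v_ref + v_anom

        # hypothetical two-term anomaly, only to show the call pattern
        v = specific_volume(35.0, 10.0, 1000.0, {(1, 0, 0): 1.0e-7, (0, 2, 1): -1.0e-12})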

  16. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertainty modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed using uniform random variables, while the uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
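
    The non-intrusive step can be sketched as follows (an assumed least-squares fit of the PC coefficients, with a toy function standing in for the ANCF/generalized-alpha solver; the sample size and polynomial degree are arbitrary):

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander
        from scipy.stats import norm, qmc

        def toy_solver(z):                      # stand-in for one deterministic multibody run
            return np.sin(z[0]) + 0.3 * z[1] ** 2

        sampler = qmc.LatinHypercube(d=2, seed=0)
        z = norm.ppf(sampler.random(200))       # LHS samples mapped to standard Gaussians
        y = np.array([toy_solver(zi) for zi in z])

        # tensor-product probabilists' Hermite basis, degree <= 2 per variable
        V = np.einsum('ni,nj->nij', hermevander(z[:, 0], 2), hermevander(z[:, 1], 2)).reshape(len(z), -1)
        coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
        print("estimated mean response:", coeffs[0])   # coefficient of the constant basis term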

  17. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  18. Operational Solution to the Nonlinear Klein-Gordon Equation

    NASA Astrophysics Data System (ADS)

    Bengochea, G.; Verde-Star, L.; Ortigueira, M.

    2018-05-01

    We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method does not use any integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. Supported by Grant SEP-CONACYT 220603; the first author was supported by SEP-PRODEP through the project UAM-PTC-630; the third author was supported by Portuguese National Funds through the FCT Foundation for Science and Technology under the project PEst-UID/EEA/00066/2013.
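
    For reference, the Adomian polynomials themselves can be generated symbolically from their standard definition, A_n = (1/n!) d^n/d(lambda)^n f(sum_k u_k lambda^k) evaluated at lambda = 0 (a generic sketch, not the authors' operational solver):

        import sympy as sp

        def adomian_polynomials(f, n_max):
            lam = sp.Symbol('lambda')
            u = sp.symbols(f'u0:{n_max + 1}')
            series = sum(u[k] * lam ** k for k in range(n_max + 1))
            return [sp.simplify(sp.diff(f(series), lam, n).subs(lam, 0) / sp.factorial(n))
                    for n in range(n_max + 1)]

        # f(u) = u**2 gives A0 = u0**2, A1 = 2*u0*u1, A2 = u1**2 + 2*u0*u2, ...
        print(adomian_polynomials(lambda v: v ** 2, 3))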

  19. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations to the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.
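
    The range-bounding property that underlies the hyper-rectangular case can be shown in one variable (a sketch with a hypothetical requirement polynomial; the paper works with multivariate polynomials over hyper-rectangles): the Bernstein coefficients of a polynomial on [0, 1] enclose its range.

        import numpy as np
        from math import comb

        def bernstein_coefficients(a):
            # a[i] are power-basis coefficients of p(x) = sum_i a[i] x^i on [0, 1]
            n = len(a) - 1
            return np.array([sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
                             for k in range(n + 1)])

        p = np.array([0.2, -1.0, 3.0, -1.5])        # hypothetical p(x) = 0.2 - x + 3x^2 - 1.5x^3
        b = bernstein_coefficients(p)
        print("p([0,1]) lies within", (b.min(), b.max()))  # min/max Bernstein coefficients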

  20. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there is an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by iteratively detecting and including only the most contributing non-zero PCE coefficients, one at a time. The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
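
    The POD step can be sketched with a snapshot singular value decomposition (synthetic low-rank data below stands in for the 3-D concentration fields produced by the high-fidelity simulations):

        import numpy as np

        rng = np.random.default_rng(0)
        true_modes = rng.standard_normal((5000, 5))            # 5 underlying spatial patterns
        snapshots = true_modes @ rng.standard_normal((5, 40)) + 0.01 * rng.standard_normal((5000, 40))

        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1             # modes capturing 99% of the variance
        basis = U[:, :r]                                       # reduced POD basis
        reduced = basis.T @ (snapshots - mean)                 # low-dimensional coordinates
        print(f"kept {r} of {snapshots.shape[1]} modes")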

  1. Nonperturbative Series Expansion of Green's Functions: The Anatomy of Resonant Inelastic X-Ray Scattering in the Doped Hubbard Model

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Haverkort, Maurits W.

    2017-12-01

    We present a nonperturbative, divergence-free series expansion of Green's functions using effective operators. The method is especially suited for computing correlators of complex operators as a series of correlation functions of simpler forms. We apply the method to study low-energy excitations in resonant inelastic x-ray scattering (RIXS) in doped one- and two-dimensional single-band Hubbard models. The RIXS operator is expanded into polynomials of spin, density, and current operators weighted by fundamental x-ray spectral functions. These operators couple to different polarization channels resulting in simple selection rules. The incident photon energy dependent coefficients help to pinpoint main RIXS contributions from different degrees of freedom. We show in particular that, with parameters pertaining to cuprate superconductors, local spin excitation dominates the RIXS spectral weight over a wide doping range in the cross-polarization channel.

  2. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body has been carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body have been taken into account. The third body describes a circular and equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order has been considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. Thus, the retrieved relationships have been applied to this moon and periodic sun-synchronous and multi-sun-synchronous orbits have been determined. Finally, numerical simulations have been carried out to validate the analytical results.

  3. Automatic differentiation for Fourier series and the radii polynomial approach

    NASA Astrophysics Data System (ADS)

    Lessard, Jean-Philippe; Mireles James, J. D.; Ransford, Julian

    2016-11-01

    In this work we develop a computer-assisted technique for proving existence of periodic solutions of nonlinear differential equations with non-polynomial nonlinearities. We exploit ideas from the theory of automatic differentiation in order to formulate an augmented polynomial system. We compute a numerical Fourier expansion of the periodic orbit for the augmented system, and prove the existence of a true solution nearby using an a-posteriori validation scheme (the radii polynomial approach). The problems considered here are given in terms of locally analytic vector fields (i.e. the field is analytic in a neighborhood of the periodic orbit) hence the computer-assisted proofs are formulated in a Banach space of sequences satisfying a geometric decay condition. In order to illustrate the use and utility of these ideas we implement a number of computer-assisted existence proofs for periodic orbits of the Planar Circular Restricted Three-Body Problem (PCRTBP).

  4. Monograph on the use of the multivariate Gram Charlier series Type A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayodom, T.; Heydt, G.

    1978-01-01

    The Gram-Charlier series is an infinite series expansion for a probability density function (pdf) in which the terms of the series involve Hermite polynomials. There are several Gram-Charlier series - the best known is Type A. The Gram-Charlier series of Type A (GCA) exists for both univariate and multivariate random variables. This monograph introduces the multivariate GCA and illustrates its use through several examples. A brief bibliography and discussion of Hermite polynomials is also included. 9 figures, 2 tables.
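
    In the univariate, standardized case the Type A series takes a particularly simple form (a sketch only; the monograph's multivariate construction uses multivariate Hermite polynomials):

        import numpy as np
        from numpy.polynomial.hermite_e import hermeval

        def gram_charlier_a(x, skew=0.0, ex_kurt=0.0):
            # normal pdf corrected by He_3 and He_4 terms weighted by the cumulants
            phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
            correction = (1.0
                          + skew / 6.0 * hermeval(x, [0, 0, 0, 1])          # He_3(x)
                          + ex_kurt / 24.0 * hermeval(x, [0, 0, 0, 0, 1]))  # He_4(x)
            return phi * correction

        x = np.linspace(-4.0, 4.0, 9)
        print(gram_charlier_a(x, skew=0.3, ex_kurt=0.5))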

  5. Are there p-adic knot invariants?

    NASA Astrophysics Data System (ADS)

    Morozov, A. Yu.

    2016-04-01

    We suggest using the Hall-Littlewood version of the Rosso-Jones formula to define the germs of p-adic HOMFLY-PT polynomials for torus knots [ m, n] as coefficients of superpolynomials in a q-expansion. In this form, they have at least the [ m, n] ↔ [ n, m] topological invariance. This opens a new possibility to interpret superpolynomials as p-adic deformations of HOMFLY polynomials and poses a question of generalizing to other knot families, which is a substantial problem for several branches of modern theory.

  6. Uniform versus Gaussian Beams: A Comparison of the Effects of Diffraction, Obscuration, and Aberrations.

    DTIC Science & Technology

    1985-12-16

    balancing is discussed for the two types of beams. Zernike polynomials representing balanced primary aberration for uniform and Gaussian annular beams... plotted on a logarithmic scale (Figs. 3c and 3d). The positions of maxima and minima and the corresponding irradiance and encircled-power values are... aberration (representing a term in the expansion of the aberration in terms of a set of "Zernike" polynomials which are orthonormal over the amplitude

  7. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The ranking method was proposed to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely value, ambiguity and fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then assessed numerically on triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for the interval type-2 fuzzy polynomials.

  8. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.

  9. Neutron diffraction determination of the cell dimensions and thermal expansion of the fluoroperovskite KMgF3 from 293 to 3.6 K

    NASA Astrophysics Data System (ADS)

    Mitchell, Roger H.; Cranswick, Lachlan M. D.; Swainson, Ian

    2006-11-01

    The cell dimensions of the fluoroperovskite KMgF3 synthesized by solid state methods have been determined by powder neutron diffraction and Rietveld refinement over the temperature range 293-3.6 K using Pt metal as an internal standard for calibration of the neutron wavelength. These data demonstrate conclusively that cubic Pm-3m KMgF3 does not undergo any phase transitions to structures of lower symmetry with decreasing temperature. Cell dimensions range from 3.9924(2) Å at 293 K to 3.9800(2) Å at 3.6 K, and are essentially constant within experimental error from 50 to 3.6 K. The thermal expansion data are described using a fourth-order polynomial function.

  10. Stress-strain state on non-thin plates and shells. Generalized theory (survey)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemish, Yu.N.; Khoma, I.Yu.

    1994-05-01

    In the first part of this survey, we examined exact and approximate analytic solutions of specific problems for thick shells and plates obtained on the basis of three-dimensional equations of the mathematical theory of elasticity. The second part of the survey, presented here, is devoted to systematization and analysis of studies made in regard to a generalized theory of plates and shells based on expansion of the sought functions into Fourier series in Legendre polynomials of the thickness coordinate. Methods are described for constructing systems of differential equations in the coefficients of the expansions (as functions of two independent variables and time), along with the corresponding boundary and initial conditions. Matters relating to substantiation of the given approach and its generalizations are also discussed.

  11. Study on the mapping of dark matter clustering from real space to redshift space

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Song, Yong-Seon

    2016-08-01

    The mapping of dark matter clustering from real space to redshift space introduces the anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and the Finger-of-God effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the "one-point" FoG term, which is independent of the separation vector between two different points, and 2) the "correlated" FoG term, which appears as indefinite polynomials and is expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space at the smallest scales by far, up to k ~ 0.2 Mpc^-1, considering the resolution of future experiments.

  12. From Jack to Double Jack Polynomials via the Supersymmetric Bridge

    NASA Astrophysics Data System (ADS)

    Lapointe, Luc; Mathieu, Pierre

    2015-07-01

    The Calogero-Sutherland model occurs in a large number of physical contexts, either directly or via its eigenfunctions, the Jack polynomials. The supersymmetric counterpart of this model, although much less ubiquitous, has an equally rich structure. In particular, its eigenfunctions, the Jack superpolynomials, appear to share the very same remarkable combinatorial and structural properties as their non-supersymmetric version. These super-functions are parametrized by superpartitions with fixed bosonic and fermionic degrees. Now, a truly amazing feature pops out when the fermionic degree is sufficiently large: the Jack superpolynomials stabilize and factorize. Their stability is with respect to their expansion in terms of an elementary basis where, in the stable sector, the expansion coefficients become independent of the fermionic degree. Their factorization is seen when the fermionic variables are stripped off in a suitable way which results in a product of two ordinary Jack polynomials (somewhat modified by plethystic transformations), dubbed the double Jack polynomials. Here, in addition to spelling out these results, which were first obtained in the context of Macdonald superpolynomials, we provide a heuristic derivation of the Jack superpolynomial case by performing simple manipulations on the supersymmetric eigen-operators, rendering them independent of the number of particles and of the fermionic degree. In addition, we work out the expression of the Hamiltonian which characterizes the double Jacks. This Hamiltonian, which defines a new integrable system, involves not only the expected Calogero-Sutherland pieces but also combinations of the generators of an underlying affine sl(2) algebra.

  13. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
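
    The idea behind the G.C.D. approach can be sketched symbolically (using sympy rather than the report's FORTRAN routines): dividing p by gcd(p, p') strips the repeated factors, leaving a square-free polynomial whose simple zeros behave well under Newton's or Muller's iteration.

        import sympy as sp

        x = sp.Symbol('x')
        p = (x - 1) ** 3 * (x + 2) ** 2 * (x - 5)     # multiple zeros at 1 and -2
        g = sp.gcd(p, sp.diff(p, x))                  # carries the repeated factors
        square_free = sp.quo(sp.expand(p), g)         # p / gcd(p, p'): simple zeros only
        print(sp.factor(square_free))                 # (x - 5)*(x - 1)*(x + 2)
        print(sp.Poly(square_free, x).nroots())       # well-conditioned numerical roots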

  14. Periodic wave, breather wave and travelling wave solutions of a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation in fluids or plasmas

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Qiang; Gao, Yi-Tian; Jia, Shu-Liang; Huang, Qian-Min; Lan, Zhong-Zhou

    2016-11-01

    In this paper, a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation is investigated, which has been presented as a model for the shallow water wave in fluids or the electrostatic wave potential in plasmas. By virtue of the binary Bell polynomials, the bilinear form of this equation is obtained. With the aid of the bilinear form, N -soliton solutions are obtained by the Hirota method, periodic wave solutions are constructed via the Riemann theta function, and breather wave solutions are obtained according to the extended homoclinic test approach. Travelling waves are constructed by the polynomial expansion method as well. Then, the relations between soliton solutions and periodic wave solutions are strictly established, which implies the asymptotic behaviors of the periodic waves under a limited procedure. Furthermore, we obtain some new solutions of this equation by the standard extended homoclinic test approach. Finally, we give a generalized form of this equation, and find that similar analytical solutions can be obtained from the generalized equation with arbitrary coefficients.

  15. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become feasible with our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback with small computational costs. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.

  16. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions P_ν^(-μ) and Q_ν^(-μ) of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.

  17. Scattering of electromagnetic plane wave from a perfect electric conducting strip placed at interface of topological insulator-chiral medium

    NASA Astrophysics Data System (ADS)

    Shoukat, Sobia; Naqvi, Qaisar A.

    2016-12-01

    In this manuscript, scattering from a perfect electric conducting strip located at the planar interface of a topological insulator (TI) and a chiral medium is investigated using the Kobayashi Potential method. Longitudinal components of the electric and magnetic vector potentials are considered in terms of unknown weighting functions. Use of the related set of boundary conditions yields two algebraic equations and four dual integral equations (DIEs). The integrands of two DIEs are expanded in terms of characteristic functions with expansion coefficients which must simultaneously satisfy the discontinuous properties of the Weber-Schafheitlin integrals and the required edge and boundary conditions. The resulting expressions are then combined with the algebraic equations to express the weighting functions in terms of the expansion coefficients; these expansion coefficients are then substituted into the remaining DIEs. The projection is applied using the Jacobi polynomials. This treatment yields a matrix equation for the expansion coefficients which is solved numerically. These expansion coefficients are used to find the scattered field. The far-zone scattering width is investigated with respect to different parameters of the geometry, i.e., the chirality of the chiral medium, the angle of incidence, and the size of the strip. Significant effects of different parameters, including the TI parameter, on the scattering width are noted.

  18. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu

    2016-07-01

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.

  19. A method for the investigation of hyperbolic motions in the gravitational field of a spheroidal planet

    NASA Astrophysics Data System (ADS)

    Konks, V. Ia.

    1981-05-01

    Barrar's (1961) method for the analysis of the motion of a satellite of an oblate planet is extended to the case of hyperbolic motion. An analysis is presented of the motion of a material point in the gravitational field of a fixed center, combined with a gravitational dipole located at the point of inertia of a dynamically symmetric planet. Formulas are obtained for the hyperbolic motion of a spacecraft in the gravitational field of a spheroidal planet with an accuracy up to the second zonal harmonic of the expansion of its potential into a Legendre polynomial series in spherical coordinates.

  20. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    DOE PAGES

    Sousedík, Bedřich; Elman, Howard C.

    2016-04-12

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.

  1. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.

  2. Method for Expressing Clinical and Statistical Significance of Ocular and Corneal Wavefront Error Aberrations

    PubMed Central

    Smolek, Michael K.

    2011-01-01

    Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570
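
    The interpolation idea can be sketched as follows (placeholder normative values and a simple bilinear interpolant, not the study's fitted 3-dimensional functions or its measured normal data):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        pupils = np.array([3.0, 5.0, 7.0, 10.0])       # pupil diameter, mm
        orders = np.array([6.0, 8.0, 10.0, 12.0])      # Zernike expansion order
        rng = np.random.default_rng(0)
        mean_rms = rng.uniform(0.05, 0.4, (4, 4))      # placeholder normal-cohort means
        sd_rms = np.full((4, 4), 0.05)                 # placeholder normal-cohort standard deviations

        mean_f = RegularGridInterpolator((pupils, orders), mean_rms)
        sd_f = RegularGridInterpolator((pupils, orders), sd_rms)

        measurement, pupil, order = 0.31, 6.0, 8.0     # hypothetical patient RMS value
        z = (measurement - mean_f([[pupil, order]])[0]) / sd_f([[pupil, order]])[0]
        print(f"z-score relative to the normal cohort: {z:.2f}")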

  3. Are Khovanov-Rozansky polynomials consistent with evolution in the space of knots?

    NASA Astrophysics Data System (ADS)

    Anokhina, A.; Morozov, A.

    2018-04-01

    R-coloured knot polynomials for m-strand torus knots Torus[m, n] are described by the Rosso-Jones formula, which is an example of evolution in n with Lyapunov exponents, labelled by Young diagrams from R⊗m. This means that they satisfy a finite-difference equation (recursion) of finite degree. For the gauge group SL(N) only diagrams with no more than N lines can contribute and the recursion degree is reduced. We claim that these properties (evolution/recursion and reduction) persist for Khovanov-Rozansky (KR) polynomials, obtained by additional factorization modulo 1 + t, which is not yet adequately described in quantum field theory. Also preserved is some weakened version of differential expansion, which is responsible at least for a simple relation between reduced and unreduced Khovanov polynomials. However, in the KR case evolution is incompatible with the mirror symmetry under the change n → -n, which may signal an ambiguity in the KR factorization even for torus knots.

  4. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier–Stokes equations with random data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables, and the velocity divergence converges to zero as the mesh is refined.

  5. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new, and much easier, method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.

  6. Efficient modeling of photonic crystals with local Hermite polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucher, C. R.; Li, Zehao; Albrecht, J. D.

    2014-04-21

    Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.

  7. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed is this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.

  8. Free vibration of rectangular plates with a small initial curvature

    NASA Technical Reports Server (NTRS)

    Adeniji-Fashola, A. A.; Oyediran, A. A.

    1988-01-01

    The method of matched asymptotic expansions is used to solve the transverse free vibration of a slightly curved, thin rectangular plate. Analytical results for natural frequencies and mode shapes are presented in the limit when the dimensionless bending rigidity, epsilon, is small compared with the in-plane forces. Results for different boundary conditions are obtained when the initial deflection is (1) a polynomial in both directions, (2) the product of a polynomial and a trigonometric function, and (3) arbitrary. For the arbitrary initial deflection case, the Fourier series technique is used to define the initial deflection. The results obtained show that the natural frequencies of vibration of slightly curved plates coincide with those of perfectly flat, prestressed rectangular plates. However, the eigenmodes are very different from those of initially flat prestressed rectangular plates. The total deflection is found to be the sum of the initial deflection, the deflection resulting from the solution of the flat plate problem, and the deflection resulting from the static problem.

  9. Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.

    We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30% of the cases and within the top five in a further 30% of the cases, giving rise to an 80% probability of finding the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.

  10. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. There are some methods that have been developed in order to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was firstly proposed to find real roots of fuzzy polynomial equation. Therefore, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation to a system of crisp interval type-2 fuzzy polynomial equation. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach by numerical example.

  11. Topology of Large-Scale Structures of Galaxies in two Dimensions—Systematic Effects

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2017-02-01

    We study the two-dimensional topology of the galactic distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects, principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts, such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than 1% by adopting pixels smaller than 1/3 of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than 1% between z = 1 and z = 0 for smoothing scales R_G > 9 Mpc/h. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in the shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant ~O(10%) effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.

  12. Density, Viscosity and Surface Tension of Binary Mixtures of 1-Butyl-1-Methylpyrrolidinium Tricyanomethanide with Benzothiophene.

    PubMed

    Domańska, Urszula; Królikowska, Marta; Walczak, Klaudia

    2014-01-01

    The effects of temperature and composition on the density and viscosity of pure benzothiophene and ionic liquid (IL), and those of the binary mixtures containing the IL 1-butyl-1-methylpyrrolidinium tricyanomethanide ([BMPYR][TCM] + benzothiophene), are reported at six temperatures (308.15, 318.15, 328.15, 338.15, 348.15 and 358.15) K and ambient pressure. The temperature dependences of the density and viscosity were represented by an empirical second-order polynomial and by the Vogel-Fulcher-Tammann equation, respectively. The density and viscosity variations with composition were described by polynomials. Excess molar volumes and viscosity deviations were calculated and correlated by Redlich-Kister polynomial expansions. The surface tensions of benzothiophene, pure IL and binary mixtures of ([BMPYR][TCM] + benzothiophene) were measured at atmospheric pressure at four temperatures (308.15, 318.15, 328.15 and 338.15) K. The surface tension deviations were calculated and correlated by a Redlich-Kister polynomial expansion. The temperature dependence of the interfacial tension was used to evaluate the surface entropy, the surface enthalpy, the critical temperature, the surface energy and the parachor for the pure IL. These measurements complete the information on the influence of temperature and composition on the physicochemical properties of the selected IL, which was chosen as a possible new entrainer in the separation of sulfur compounds from fuels. A qualitative analysis of these quantities in terms of molecular interactions is reported. The obtained results indicate that the interactions of the IL with benzothiophene are strongly dependent on packing effects and hydrogen bonding of this IL with the polar solvent.
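
    For readers unfamiliar with the correlation step, the sketch below (synthetic data and made-up coefficients, not the measured values from this work) fits a Redlich-Kister expansion V^E(x1) = x1(1 - x1) sum_k A_k (2x1 - 1)^k to excess molar volume data by linear least squares:

    # Illustrative sketch: Redlich-Kister fit of synthetic excess-molar-volume data.
    import numpy as np

    x1 = np.linspace(0.05, 0.95, 19)                  # mole fraction of the IL
    VE = x1 * (1 - x1) * (-1.8 + 0.4 * (2 * x1 - 1))  # fake V^E data, cm^3/mol
    VE += 0.005 * np.random.default_rng(1).normal(size=x1.size)

    n_terms = 3                                       # A_0 .. A_2
    basis = np.column_stack([x1 * (1 - x1) * (2 * x1 - 1) ** k for k in range(n_terms)])
    A, *_ = np.linalg.lstsq(basis, VE, rcond=None)
    print("Redlich-Kister coefficients:", np.round(A, 3))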

  13. Study on the mapping of dark matter clustering from real space to redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Song, Yong-Seon, E-mail: yizheng@kasi.re.kr, E-mail: ysong@kasi.re.kr

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: (1) the 'one-point' FoG term, which is independent of the separation vector between two different points, and (2) the 'correlated' FoG term, which appears as indefinite polynomials and is expanded in the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed two-dimensional density power spectrum in redshift space up to k ∼ 0.2 Mpc^-1, the smallest scales reached so far, considering the resolution of future experiments.

  14. An efficient algorithm for building locally refined hp-adaptive H-PCFE: Application to uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-12-01

    Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using a distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To illustrate its superior performance, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.

  15. A sparse matrix-vector multiplication based algorithm for accurate density matrix computations on systems of millions of atoms

    NASA Astrophysics Data System (ADS)

    Ghale, Purnima; Johnson, Harley T.

    2018-06-01

    We present an efficient sparse matrix-vector (SpMV) based method to compute the density matrix P from a given Hamiltonian in electronic structure computations. Our method is a hybrid approach based on Chebyshev-Jackson approximation theory and matrix purification methods like the second order spectral projection purification (SP2). Recent methods to compute the density matrix scale as O(N) in the number of floating point operations but are accompanied by large memory and communication overhead, and they are based on iterative use of the sparse matrix-matrix multiplication kernel (SpGEMM), which is known to be computationally irregular. In addition to irregularity in the sparse Hamiltonian H, the nonzero structure of intermediate estimates of P depends on products of H and evolves over the course of computation. On the other hand, an expansion of the density matrix P in terms of Chebyshev polynomials is straightforward and SpMV based; however, the resulting density matrix may not satisfy the required constraints exactly. In this paper, we analyze the strengths and weaknesses of the Chebyshev-Jackson polynomials and the second order spectral projection purification (SP2) method, and propose to combine them so that the accurate density matrix can be computed using the SpMV computational kernel only, and without having to store the density matrix P. Our method accomplishes these objectives by using the Chebyshev polynomial estimate as the initial guess for SP2, which is followed by using sparse matrix-vector multiplications (SpMVs) to replicate the behavior of the SP2 algorithm for purification. We demonstrate the method on a tight-binding model system of an oxide material containing more than 3 million atoms. In addition, we also present the predicted behavior of our method when applied to near-metallic Hamiltonians with a wide energy spectrum.
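
    To make the SpMV-only idea concrete, the following sketch (a toy tight-binding chain with assumed spectral bounds and zero temperature, not the paper's oxide system or code) applies a Jackson-damped Chebyshev approximation of the step function theta(mu - H) to a vector using only sparse matrix-vector products:

    # Minimal sketch: approximate the density-matrix action rho @ v with a
    # Jackson-damped Chebyshev expansion, using SpMV only.
    import numpy as np
    import scipy.sparse as sp

    def chebyshev_density_apply(H, v, mu, emin, emax, M=200):
        a, b = (emax - emin) / 2, (emax + emin) / 2
        Ht = (H - b * sp.identity(H.shape[0])) / a           # spectrum mapped into [-1, 1]
        mut = (mu - b) / a
        k = np.arange(1, M + 1)
        c = np.empty(M + 1)
        c[0] = (np.pi - np.arccos(mut)) / np.pi              # Chebyshev coefficients of the step
        c[1:] = -2.0 * np.sin(k * np.arccos(mut)) / (np.pi * k)
        j = np.arange(M + 1)                                 # Jackson damping factors
        g = ((M - j + 1) * np.cos(np.pi * j / (M + 1))
             + np.sin(np.pi * j / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
        t_prev, t_curr = v, Ht @ v                           # Chebyshev recursion, SpMV only
        out = g[0] * c[0] * t_prev + g[1] * c[1] * t_curr
        for m in range(2, M + 1):
            t_prev, t_curr = t_curr, 2 * (Ht @ t_curr) - t_prev
            out += g[m] * c[m] * t_curr
        return out                                           # approximates rho @ v

    # Toy usage: a 1D tight-binding chain at half filling.
    n = 200
    H = sp.diags([np.full(n - 1, -1.0), np.full(n - 1, -1.0)], [-1, 1], format="csr")
    v = np.zeros(n); v[n // 2] = 1.0
    col = chebyshev_density_apply(H, v, mu=0.0, emin=-2.0, emax=2.0)
    print("diagonal density-matrix element ~", col[n // 2])  # ~0.5 at half filling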

  16. PV O&M Cost Model and Cost Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andy

    This is a presentation on the PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering the estimation of PV O&M costs, polynomial expansion, and the implementation of Net Present Value (NPV) and a reserve account in cost models.

  17. Rational approximation to e to the -x power with negative poles

    NASA Technical Reports Server (NTRS)

    Cuthill, E.

    1977-01-01

    MACSYMA was applied to the generation of an expansion in terms of Laguerre polynomials to obtain approximations to e to the -x power on [0, infinity). These approximations are compared with those developed by Saff, Schonhage, and Varga.

  18. New methods in the Newtonian potential theory. I - The representation of the potential energy of homogeneous gravitating bodies by converging series

    NASA Astrophysics Data System (ADS)

    Kondrat'ev, B. P.

    1993-06-01

    A method is developed for the representation of the potential energy of homogeneous gravitating, as well as electrically charged, bodies in the form of special series. These series contain members consisting of products of the corresponding coefficients appearing in the expansion of the external and internal Newtonian potentials in Legendre polynomial series. Several versions of the representation of the potential energy through these series are possible. A formula is derived which expresses the potential energy not as a volume integral, as is conventional, but as an integral over the body surface. The method is tested for the particular cases of a sphere and an ellipsoid, and the convergence of the resulting series is demonstrated.

  19. Quantum Hurwitz numbers and Macdonald polynomials

    NASA Astrophysics Data System (ADS)

    Harnad, J.

    2016-11-01

    Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.

  20. LQR Control of Thin Shell Dynamics: Formulation and Numerical Implementation

    NASA Technical Reports Server (NTRS)

    delRosario, R. C. H.; Smith, R. C.

    1997-01-01

    A PDE-based feedback control method for thin cylindrical shells with surface-mounted piezoceramic actuators is presented. Donnell-Mushtari equations modified to incorporate both passive and active piezoceramic patch contributions are used to model the system dynamics. The well-posedness of this model and the associated LQR problem with an unbounded input operator are established through analytic semigroup theory. The model is discretized using a Galerkin expansion with basis functions constructed from Fourier polynomials tensored with cubic splines, and convergence criteria for the associated approximate LQR problem are established. The effectiveness of the method for attenuating the coupled longitudinal, circumferential and transverse shell displacements is illustrated through a set of numerical examples.

  1. Instability of the cored barotropic disc: the linear eigenvalue formulation

    NASA Astrophysics Data System (ADS)

    Polyachenko, E. V.

    2018-05-01

    Gaseous rotating razor-thin discs are a testing ground for theories of spiral structure that try to explain the appearance and diversity of disc galaxy patterns. These patterns are believed to arise spontaneously under the action of gravitational instability, but calculations of its characteristics in the gas remain largely obscure. The paper suggests a new method for finding the spiral patterns based on an expansion of small-amplitude perturbations over Lagrange polynomials in small radial elements. The final matrix equation is extracted from the original hydrodynamical equations without the use of an approximate theory and has the form of a linear algebraic eigenvalue problem. The method is applied to a galactic model with a cored exponential density profile.

  2. Nonclassical models of the theory of plates and shells

    NASA Astrophysics Data System (ADS)

    Annin, Boris D.; Volchkov, Yuri M.

    2017-11-01

    Publications dealing with methods of reducing a three-dimensional problem of the elasticity theory to a two-dimensional problem of the theory of plates and shells are reviewed. Two approaches are considered: the use of kinematic and force hypotheses, and the expansion of solutions of the three-dimensional elasticity theory in terms of a complete system of functions. Papers in which a three-dimensional problem is reduced to a two-dimensional one by approximating each of the unknown functions (stresses and displacements) with segments of Legendre polynomial series are also reviewed.

  3. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials such as (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind, and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in the form of tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  4. Mixed Legendre moments and discrete scattering cross sections for anisotropy representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calloo, A.; Vidal, J. F.; Le Tellier, R.

    2012-07-01

    This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded in Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better model the multigroup transfer cross section and prevent the occurrence of any negative value for it. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method against which to compare the conventional Legendre expansion, and to determine its pertinence when applied to reactor physics calculations. (authors)

  5. Performance analysis of 60-min to 1-min integration time rain rate conversion models in Malaysia

    NASA Astrophysics Data System (ADS)

    Ng, Yun-Yann; Singh, Mandeep Singh Jit; Thiruchelvam, Vinesh

    2018-01-01

    Utilizing the frequency bands above 10 GHz is a current focus as a result of the fast expansion of radio communication systems in Malaysia. However, rain fade is the critical factor in attenuation of signal propagation for frequencies above 10 GHz. Malaysia is located in a tropical and equatorial region with high rain intensity throughout the year, and this study reviews the rain distribution and evaluates the performance of 60-min to 1-min integration time rain rate conversion methods for Malaysia. Several conversion methods, such as Segal, Chebil & Rahman, Burgeono, Emiliani, Lavergnat and Gole (LG), Simplified Moupfouma, Joo et al., a fourth-order polynomial fit and a logarithmic model, have been chosen to evaluate the performance in predicting the 1-min rain rate for 10 sites in Malaysia. The results show that the Chebil & Rahman model, the Lavergnat & Gole model, the fourth-order polynomial fit and the logarithmic model show the best performance in 60-min to 1-min rain rate conversion over the 10 sites. No single model can claim to perform the best across all 10 sites; however, by averaging RMSE and SC-RMSE over the 10 sites, the Chebil & Rahman model is the best method.
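
    As a toy illustration of one of the conversion approaches mentioned above (the fourth-order polynomial fit; the data below are synthetic, not the Malaysian measurements), the mapping from 60-min to 1-min rain rates can be fit and scored with RMSE as follows:

    # Hedged sketch: fourth-order polynomial conversion of 60-min to 1-min rain rates.
    import numpy as np

    rng = np.random.default_rng(2)
    R60 = np.sort(rng.uniform(1, 60, 40))          # mm/h, hourly-integrated rates
    R1_true = 1.2 * R60 + 0.02 * R60**2            # stand-in for measured 1-min rates
    R1_obs = R1_true + rng.normal(0, 2, R60.size)

    coeffs = np.polyfit(R60, R1_obs, deg=4)        # fourth-order polynomial fit
    R1_pred = np.polyval(coeffs, R60)
    rmse = np.sqrt(np.mean((R1_pred - R1_obs) ** 2))
    print("fit coefficients:", np.round(coeffs, 4), " RMSE:", round(rmse, 3))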

  6. Atomic Gaussian type orbitals and their Fourier transforms via the Rayleigh expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yükçü, Niyazi

    Gaussian type orbitals (GTOs), which are one of the types of exponential type orbitals (ETOs), are commonly used as basis functions in multi-center atomic and molecular integrals to better understand the physical and chemical properties of matter. In the Fourier transform method (FTM), the basis functions themselves are not simple to manipulate mathematically, but their Fourier transforms are easier to use. In this work, with the help of the FTM, the Rayleigh expansion and some properties of unnormalized GTOs, we present new mathematical results for the Fourier transform of GTOs in terms of Laguerre polynomials, hypergeometric and Whittaker functions. Physical and analytical properties of GTOs are discussed and some numerical results are given in a table. Finally, we compare our mathematical results with other known literature results by using a computer program, and details of the evaluation are presented.

  7. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z^2⟩_E and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.

  8. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients, in the particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of these roots was studied in (Pasquini, 1994). In this paper, following the lines of (Pasquini, 1994), more favourable results than those in (Pasquini, 1994) are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices, even if these matrices are real and symmetric.

  9. Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.

    PubMed

    Mall, Susmita; Chakraverty, S

    2016-08-01

    A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations cannot, in general, be solved exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.

  10. On the Gibbs phenomenon 3: Recovering exponential accuracy in a sub-interval from a spectral partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1993-01-01

    The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L_2 function f(x) in terms of either the trigonometric polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.

  11. From r-spin intersection numbers to Hodge integrals

    NASA Astrophysics Data System (ADS)

    Ding, Xiang-Mao; Li, Yuping; Meng, Lingxian

    2016-01-01

    The generalized Kontsevich matrix model (GKMM) with a certain given potential is the partition function of r-spin intersection numbers. We represent this GKMM in terms of fermions, expand it in terms of the Schur polynomials by the boson-fermion correspondence, and link it with a Hurwitz partition function and a Hodge partition function by operators in a \widehat{GL}(∞) group. Then, from a W_{1+∞} constraint on the partition function of r-spin intersection numbers, we obtain a W_{1+∞} constraint for the Hodge partition function. The W_{1+∞} constraint completely determines the Schur polynomial expansion of the Hodge partition function.

  12. Sparse polynomial space approach to dissipative quantum systems: application to the sub-ohmic spin-boson model.

    PubMed

    Alvermann, A; Fehske, H

    2009-04-17

    We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.

  13. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of the polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise due to the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ^4) model shows excellent agreement with experimental data.

  14. A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.

    PubMed

    Langley, Jason; Zhao, Qun

    2009-09-07

    The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Gaussian noise was then added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev implementation and the Legendre implementation of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a well-recognized 3D phase unwrapping software package for functional MRI.

  15. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
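
    A minimal sketch of the permeability model described above (a 1D grid, an exponential covariance kernel, and the mean, variance, and correlation length are assumptions made here for illustration): a truncated Karhunen-Loève expansion of a log-Gaussian field, sampled from a handful of standard-normal variables.

    # Illustrative sketch: truncated KL expansion of a log-Gaussian permeability field.
    import numpy as np

    n, L, sigma, corr_len = 100, 1.0, 0.5, 0.2
    x = np.linspace(0, L, n)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance of log-k

    eigval, eigvec = np.linalg.eigh(C)             # KL modes = eigenpairs of the covariance
    idx = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[idx], eigvec[:, idx]
    m = 10                                         # truncation: keep 10 dominant modes

    def sample_permeability(xi, mean_log_k=-30.0):
        """xi: m independent standard normal variables -> one permeability realization."""
        log_k = mean_log_k + eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
        return np.exp(log_k)

    k_real = sample_permeability(np.random.default_rng(3).normal(size=m))
    print("permeability range:", k_real.min(), "-", k_real.max())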

  16. Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST

    NASA Astrophysics Data System (ADS)

    Kerbel, G.; Xiong, Z.

    2006-10-01

    Early implementations of the Fokker-Planck collision operator and moment computations in TEMPEST used low-order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate, we developed an alternative higher-order interpolation scheme for the Rosenbluth potentials and a high-order finite volume method in the TEMPEST coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and the fluxes for the conservative differencing are then computed directly in the TEMPEST coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.

  17. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere common place, run times for large complex basin models can still be on the order of days to weeks, thus, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
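
    For orientation, the sketch below shows the non-intrusive projection idea in its simplest form (one standard-normal input, a cheap stand-in function instead of a HydroGeoSphere run, probabilists' Hermite polynomials): once the chaos coefficients are computed by quadrature, the mean and variance follow directly from them.

    # Minimal 1D non-intrusive PCE sketch via Gauss-Hermite spectral projection.
    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial

    def model(xi):                       # stand-in QoI as a function of a normal input
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order, nq = 6, 20
    nodes, weights = He.hermegauss(nq)   # weight exp(-x^2/2); normalize to a pdf
    weights = weights / np.sqrt(2 * np.pi)

    # Projection: c_k = E[model(xi) He_k(xi)] / E[He_k^2], with E[He_k^2] = k!
    c = np.array([np.sum(weights * model(nodes) * He.hermeval(nodes, [0] * k + [1]))
                  / factorial(k) for k in range(order + 1)])

    mean = c[0]
    variance = np.sum(c[1:]**2 * np.array([factorial(k) for k in range(1, order + 1)]))
    print("PCE mean:", round(mean, 4), " PCE variance:", round(variance, 4))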

  18. A stochastic approach to online vehicle state and parameter estimation, with application to inertia estimation for rollover prevention and battery charge/health estimation.

    DOT National Transportation Integrated Search

    2013-08-01

    This report summarizes research conducted at Penn State, Virginia Tech, and West Virginia University on the development of algorithms based on the generalized polynomial chaos (gpc) expansion for the online estimation of automotive and transportation...

  19. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.

  20. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.

  1. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.

  2. Fock expansion of multimode pure Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it

    2015-12-15

    The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. The Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.

  3. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
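
    The sketch below illustrates the one technical point emphasized here, with toy data rather than FERET/PIE images and with one possible definition of the fractional power kernel (the sign-preserving form is an assumption): because the Gram matrix need not be positive semidefinite, only eigenvectors with positive eigenvalues are retained when extracting kernel PCA features.

    # Hedged sketch: kernel PCA with a fractional power polynomial "kernel".
    import numpy as np

    def frac_poly_gram(X, d=0.8):
        S = X @ X.T
        return np.sign(S) * np.abs(S) ** d          # not guaranteed positive semidefinite

    def kernel_pca_features(X, d=0.8, n_comp=5):
        K = frac_poly_gram(X, d)
        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                               # double-center the Gram matrix
        w, V = np.linalg.eigh(Kc)
        order = np.argsort(w)[::-1]
        w, V = w[order], V[:, order]
        keep = w > 1e-10                             # discard non-positive eigenvalues
        w, V = w[keep][:n_comp], V[:, keep][:, :n_comp]
        return Kc @ (V / np.sqrt(w))                 # projections of the training samples

    X = np.random.default_rng(4).normal(size=(40, 16))  # stand-in for Gabor feature vectors
    print(kernel_pca_features(X).shape)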

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Yang, Xiu; Zheng, Bin

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in the solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.

  6. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.

  7. Modeling State-Space Aeroelastic Systems Using a Simple Matrix Polynomial Approach for the Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2008-01-01

    A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial form of the flutter equations of motion (EOM) is formed. A technique of recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM have been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in a tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method, and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
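
    A hedged sketch of the tabular-fit step (toy 2x2 matrices at invented reduced frequencies, not the MATLAB routine referenced above): given GAF matrices Q(ik) tabulated at several reduced frequencies k, a matrix polynomial Q(s) ≈ A0 + A1 s + A2 s^2 with s = ik can be recovered entrywise by complex least squares.

    # Illustrative sketch: fit tabular GAF matrices with a matrix polynomial in s = i*k.
    import numpy as np

    k_tab = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 1.0])     # reduced frequencies
    s = 1j * k_tab
    rng = np.random.default_rng(5)
    A0t, A1t, A2t = rng.normal(size=(3, 2, 2))           # "true" matrices for toy data
    Q_tab = np.array([A0t + A1t * si + A2t * si**2 for si in s])  # tabular GAF data

    V = np.column_stack([np.ones_like(s), s, s**2])      # Vandermonde in s = i*k
    # Solve V @ [A0; A1; A2] = Q for every matrix entry simultaneously.
    coef, *_ = np.linalg.lstsq(V, Q_tab.reshape(len(s), -1), rcond=None)
    A0, A1, A2 = (c.reshape(2, 2) for c in coef)
    print("max fit error:", np.max(np.abs(A0.real - A0t)))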

  8. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...
    1. ...Journal of Computer Vision, vol. 92, no. 1, pp. 1-31.
    2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library.

  9. Compact representation of continuous energy surfaces for more efficient protein design

    PubMed Central

    Hallen, Mark A.; Gainza, Pablo; Donald, Bruce R.

    2015-01-01

    In macromolecular design, conformational energies are sensitive to small changes in atom coordinates, so modeling the small, continuous motions of atoms around low-energy wells confers a substantial advantage in structural accuracy; however, modeling these motions comes at the cost of a very large number of energy function calls, which form the bottleneck in the design calculation. In this work, we remove this bottleneck by consolidating all conformational energy evaluations into the precomputation of a local polynomial expansion of the energy about the “ideal” conformation for each low-energy, “rotameric” state of each residue pair. This expansion is called Energy as Polynomials in Internal Coordinates (EPIC), where the internal coordinates can be sidechain dihedrals, backrub angles, and/or any other continuous degrees of freedom of a macromolecule, and any energy function can be used without adding any asymptotic complexity to the design. We demonstrate that EPIC efficiently represents the energy surface for both molecular-mechanics and quantum-mechanical energy functions, and apply it specifically to protein design to model both sidechain and backbone degrees of freedom. PMID:26089744
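
    To illustrate the kind of precomputation described here (the toy energy function, coordinate ranges, and helper names are mine, not the EPIC implementation), a pairwise energy sampled around the ideal rotamer can be fit with a quadratic polynomial in the internal coordinates by least squares, after which the polynomial can stand in for further energy-function calls:

    # Illustrative sketch: local polynomial fit of an energy term in internal coordinates.
    import numpy as np
    from itertools import combinations_with_replacement

    def toy_pair_energy(chi):                      # stand-in for a molecular-mechanics call
        return 1.5 * chi[0]**2 + 0.8 * chi[1]**2 + 0.3 * chi[0] * chi[1] - 2.0

    def fit_quadratic(samples, energies):
        """Fit E(chi) ~ c0 + sum_i c_i chi_i + sum_{i<=j} c_ij chi_i chi_j."""
        cols, terms = [np.ones(len(samples))], [()]
        d = samples.shape[1]
        for i in range(d):
            cols.append(samples[:, i]); terms.append((i,))
        for i, j in combinations_with_replacement(range(d), 2):
            cols.append(samples[:, i] * samples[:, j]); terms.append((i, j))
        coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), energies, rcond=None)
        return terms, coeffs

    rng = np.random.default_rng(6)
    chi_samples = rng.uniform(-0.3, 0.3, size=(50, 2))   # radians around the ideal rotamer
    E_samples = np.array([toy_pair_energy(c) for c in chi_samples])
    terms, coeffs = fit_quadratic(chi_samples, E_samples)
    print(dict(zip(terms, np.round(coeffs, 3))))          # recovered polynomial coefficients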

  10. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
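
    As a toy illustration of the estimation principle (a one-state linear model and synthetic data, nothing like the 21-state rotorcraft model), parameters can be identified by minimizing the output residuals with the Levenberg-Marquardt algorithm, which for Gaussian measurement noise is equivalent to minimizing the negative log likelihood:

    # Hedged sketch: Levenberg-Marquardt identification of a toy model's parameters.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.integrate import odeint

    t = np.linspace(0, 5, 100)
    true_p = [-0.8, 1.2]                                   # "stability" and "control" derivatives

    def simulate(p):
        a, b = p
        return odeint(lambda x, t_: a * x + b * np.sin(t_), 0.0, t).ravel()

    y_meas = simulate(true_p) + 0.02 * np.random.default_rng(7).normal(size=t.size)

    res = least_squares(lambda p: simulate(p) - y_meas, x0=[-0.1, 0.5], method="lm")
    print("identified parameters:", np.round(res.x, 3))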

  11. Efficient evaluation of the material response of tissues reinforced by statistically oriented fibres

    NASA Astrophysics Data System (ADS)

    Hashlamoun, Kotaybah; Grillo, Alfio; Federico, Salvatore

    2016-10-01

    For several classes of soft biological tissues, modelling complexity is in part due to the arrangement of the collagen fibres. In general, the arrangement of the fibres can be described by defining, at each point in the tissue, the structure tensor (i.e. the tensor product of the unit vector of the local fibre arrangement by itself) and a probability distribution of orientation. In this approach, assuming that the fibres do not interact with each other, the overall contribution of the collagen fibres to a given mechanical property of the tissue can be estimated by means of an averaging integral of the constitutive function describing the mechanical property under study over the set of all possible directions in space. Except for the particular case of fibre constitutive functions that are polynomial in the transversely isotropic invariants of the deformation, the averaging integral cannot be evaluated directly in a single calculation because, in general, the integrand depends both on deformation and on fibre orientation in a non-separable way. The problem is thus, in a sense, analogous to that of solving the integral of a function of two variables, which cannot be split up into the product of two functions, each depending only on one of the variables. Although numerical schemes can be used to evaluate the integral at each deformation increment, this is computationally expensive. With the purpose of containing computational costs, this work proposes approximation methods that are based on the direct integrability of polynomial functions and that do not require the step-by-step evaluation of the averaging integrals. Three different methods are proposed: (a) a Taylor expansion of the fibre constitutive function in the transversely isotropic invariants of the deformation; (b) a Taylor expansion of the fibre constitutive function in the structure tensor; (c) for the case of a fibre constitutive function having a polynomial argument, an approximation in which the directional average of the constitutive function is replaced by the constitutive function evaluated at the directional average of the argument. Each of the proposed methods approximates the averaged constitutive function in such a way that it is multiplicatively decomposed into the product of a function of the deformation only and a function of the structure tensors only. In order to assess the accuracy of these methods, we evaluate the constitutive functions of the elastic potential and the Cauchy stress, for a biaxial test, under different conditions, i.e. different fibre distributions and different ratios of the nominal strains in the two directions. The results are then compared against those obtained with an averaging method available in the literature, as well as against the integration made at each increment of deformation.

  12. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  13. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Bases selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  14. A critical analysis of the numerical and analytical methods used in the construction of the lunar gravity potential model.

    NASA Astrophysics Data System (ADS)

    Tuckness, D. G.; Jost, B.

    1995-08-01

    Current knowledge of the lunar gravity field is presented. The various methods used in determining these gravity fields are investigated and analyzed. It will be shown that weaknesses exist in the current models of the lunar gravity field. The dominant part of this weakness is caused by the lack of lunar tracking data information (farside, polar areas), which makes modeling the total lunar potential difficult. Comparisons of the various lunar models reveal an agreement in the low-order coefficients of the Legendre polynomial expansions. However, substantial differences in the models can exist in the higher-order harmonics. The main purpose of this study is to assess today's lunar gravity field models for use in tomorrow's lunar mission designs and operations.

  15. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

    The Zernike polynomial fitting method is often applied in the testing of optical components and systems, where it is used to represent the wavefront and surface error in a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over the rectangular area, as a substitute in the fitting method solves the problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system has been designed in Zemax based on Fizeau interferometry. The expressions of the two-dimensional Chebyshev polynomials have been given and their relationship with the aberrations has been presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and is related to specific Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method greatly improves the efficiency of detection and adjustment in the cylinder surface test.
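
    A rectangular-aperture fit of this kind can be sketched with standard tools; the snippet below is an illustration only (synthetic data stand in for the interferometric measurement, and the degrees are arbitrary), building the two-dimensional Chebyshev design matrix and obtaining the coefficients by least squares, as the abstract describes.

        # Sketch: least-squares fit of rectangular-aperture surface data in a
        # two-dimensional Chebyshev basis (not the authors' implementation).
        import numpy as np
        from numpy.polynomial import chebyshev as C

        rng = np.random.default_rng(0)

        # Synthetic "measurement" on a rectangular aperture mapped to [-1, 1]^2.
        nx, ny = 81, 41
        x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
        surface = 0.3 * x**2 * y + 0.05 * rng.standard_normal(x.shape)  # hypothetical data

        deg = (4, 4)                                   # Chebyshev degrees in x and y
        V = C.chebvander2d(x.ravel(), y.ravel(), deg)  # design matrix of T_i(x) T_j(y)
        coeffs, *_ = np.linalg.lstsq(V, surface.ravel(), rcond=None)

        fit = (V @ coeffs).reshape(surface.shape)
        print("rms residual:", np.sqrt(np.mean((surface - fit) ** 2)))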

  16. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

    This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.

  17. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High order models can be used without any numerical problems. The proposed method will be compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data will be used.

  18. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
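
    The CTP sampling idea can be sketched directly from the zeros of the one-dimensional Chebyshev polynomials; the snippet below is a hypothetical illustration (the per-dimension degree and bounds are arbitrary assumptions), not the authors' code.

        # Sketch of Chebyshev tensor-product (CTP) sampling: the zeros of the
        # degree-n Chebyshev polynomial in each dimension are combined into a
        # tensor-product grid (illustrative only, not the authors' implementation).
        import numpy as np
        from itertools import product

        def chebyshev_zeros(n, a=-1.0, b=1.0):
            """Zeros of T_n mapped from [-1, 1] to [a, b]."""
            k = np.arange(1, n + 1)
            z = np.cos((2 * k - 1) * np.pi / (2 * n))
            return 0.5 * (a + b) + 0.5 * (b - a) * z

        def ctp_samples(n_per_dim, bounds):
            """Full tensor-product sample set over a hyper-rectangle."""
            axes = [chebyshev_zeros(n_per_dim, a, b) for (a, b) in bounds]
            return np.array(list(product(*axes)))

        samples = ctp_samples(5, bounds=[(0.0, 1.0), (-2.0, 2.0)])
        print(samples.shape)   # (25, 2): 5 Chebyshev zeros per dimension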

  19. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials by theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness even in the case of a wavefront with incomplete data.

  20. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
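
    The snippet below sketches a standard Chebyshev semi-iteration smoother for a symmetric positive definite system, given bounds on the eigenvalues to be damped; it illustrates the general idea of polynomial smoothing but is not the multilevel-specific polynomial discussed in the paper, and the test problem and bounds are assumptions.

        # Minimal Chebyshev polynomial smoother for a symmetric positive definite
        # system (standard semi-iteration; the paper's multilevel-specific
        # polynomial is not reproduced here).
        import numpy as np

        def chebyshev_smooth(A, b, x, lam_min, lam_max, steps=3):
            """A few Chebyshev smoothing steps targeting eigenvalues in [lam_min, lam_max]."""
            theta = 0.5 * (lam_max + lam_min)
            delta = 0.5 * (lam_max - lam_min)
            sigma = theta / delta
            rho = 1.0 / sigma
            r = b - A @ x
            d = r / theta
            for _ in range(steps):
                x = x + d
                r = r - A @ d
                rho_next = 1.0 / (2.0 * sigma - rho)
                d = rho_next * rho * d + (2.0 * rho_next / delta) * r
                rho = rho_next
            return x

        # 1D Poisson test problem; the smoother targets the upper part of the spectrum.
        n = 64
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        lam_max = 4.0                       # simple upper bound for this matrix
        x = chebyshev_smooth(A, b, np.zeros(n), lam_max / 4.0, lam_max, steps=5)
        print(np.linalg.norm(b - A @ x))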

  1. A Novel Approach to Solve Linearized Stellar Pulsation Equations

    NASA Astrophysics Data System (ADS)

    Bard, Christopher; Teitler, S.

    2011-01-01

    We present a new approach to modeling linearized, non-radial pulsations in differentially rotating, massive stars. As a first step in this direction, we consider adiabatic pulsations and adopt the Cowling approximation that perturbations of the gravitational potential and its radial derivative are negligible. The angular dependence of the pulsation modes is expressed as a series expansion of associated Legendre polynomials; the resulting coupled system of differential equations is then solved by finding the eigenfrequencies at which the determinant of a characteristic matrix vanishes. Our method improves on previous treatments by removing the requirement that an arbitrary normalization be applied to the eigenfunctions; this brings the benefit of improved numerical robustness.

  2. The Orbital precession around oblate spheroids

    NASA Astrophysics Data System (ADS)

    Montanus, J. M. C.

    2006-07-01

    An exact series will be given for the gravitational potential generated by an oblate gravitating source. To this end the corresponding Epstein-Hubbell type elliptic integral is evaluated. The procedure is based on the Legendre polynomial expansion method and on combinatorial techniques. The result is of interest for gravitational models based on the linearity of the gravitational potential. The series approximation for such potentials is of use for the analysis of orbital motions around a nonspherical source. It can be considered advantageous that the analysis is purely algebraic. Numerical approximations are not required. As an important example, the expression for the orbital precession will be derived for an object orbiting around an oblate homogeneous spheroid.

  3. On computing the geoelastic response to a disk load

    NASA Astrophysics Data System (ADS)

    Bevis, M.; Melini, D.; Spada, G.

    2016-06-01

    We review the theory of the Earth's elastic and gravitational response to a surface disk load. The solutions for displacement of the surface and the geoid are developed using expansions of Legendre polynomials, their derivatives and the load Love numbers. We provide a MATLAB function called diskload that computes the solutions for both uncompensated and compensated disk loads. In order to numerically implement the Legendre expansions, it is necessary to choose a harmonic degree, nmax, at which to truncate the series used to construct the solutions. We present a rule of thumb (ROT) for choosing an appropriate value of nmax, describe the consequences of truncating the expansions prematurely and provide a means to judiciously violate the ROT when that becomes a practical necessity.
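
    Truncation of such a Legendre series at a chosen degree nmax can be illustrated as follows; the coefficients below are hypothetical stand-ins (in the actual solutions they are built from load Love numbers), and this is not the diskload MATLAB function.

        # Illustration of truncating a Legendre expansion at harmonic degree nmax
        # (not the diskload MATLAB function itself): evaluate
        #   f(theta) = sum_{n=0}^{nmax} c_n P_n(cos theta).
        import numpy as np
        from numpy.polynomial import legendre as L

        nmax = 200
        theta = np.linspace(0.0, np.pi, 501)
        mu = np.cos(theta)

        # Hypothetical degree-dependent coefficients standing in for the load
        # Love number combinations of the loading theory.
        n = np.arange(nmax + 1)
        c = 1.0 / (1.0 + n)

        f = L.legval(mu, c)          # sum_n c_n P_n(mu), truncated at nmax
        print(f[:3])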

  4. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or may be infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.

  5. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  6. Solution of Einstein's Equation for Deformation of a Magnetized Neutron Star

    NASA Astrophysics Data System (ADS)

    Rizaldy, R.; Sulaksono, A.

    2018-04-01

    We studied the effect of the very large and non-uniform magnetic field present in a neutron star on the deformation of the star. In our analytical calculation we used a multipole expansion of the metric tensor and the momentum-energy tensor in Legendre polynomials up to the quadrupole order. In this way we obtain the solutions of Einstein's equation with the correction factors due to the magnetic field taken into account. Our numerical calculation shows that the degree of deformation (ellipticity) increases as the mass decreases.

  7. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  8. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  9. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  10. Combinatorics of γ-structures.

    PubMed

    Han, Hillary S W; Li, Thomas J X; Reidys, Christian M

    2014-08-01

    In this article we study canonical γ-structures, a class of RNA pseudoknot structures that plays a key role in the context of polynomial time folding of RNA pseudoknot structures. A γ-structure is composed of specific building blocks that have topological genus less than or equal to γ, where composition means concatenation and nesting of such blocks. Our main result is the derivation of the generating function of γ-structures via symbolic enumeration using so called irreducible shadows. We furthermore recursively compute the generating polynomials of irreducible shadows of genus ≤ γ. The γ-structures are constructed via γ-matchings. For 1 ≤ γ ≤ 10, we compute Puiseux expansions at the unique, dominant singularities, allowing us to derive simple asymptotic formulas for the number of γ-structures.

  11. Phase demodulation method from a single fringe pattern based on correlation with a polynomial form.

    PubMed

    Robin, Eric; Valle, Valéry; Brémand, Fabrice

    2005-12-01

    The method presented extracts the demodulated phase from only one fringe pattern. Locally, this method approximates the fringe pattern morphology with the help of a mathematical model. The degree of similarity between the mathematical model and the real fringe is estimated by minimizing a correlation function. To use an optimization process, we have chosen a polynomial form as the mathematical model. However, the use of a polynomial form induces an identification procedure with the purpose of retrieving the demodulated phase. This method, polynomial modulated phase correlation, is tested on several examples. Its performance, in terms of speed and precision, is presented on very noisy fringe patterns.

  12. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  13. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
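
    A toy version of the polynomial-expansion idea for sparse matrix functions is sketched below: f(A) is approximated by a truncated Chebyshev series evaluated with sparse matrix-matrix products. This is only an illustration of the general technique under assumed spectral bounds and a toy matrix; it is not NTPoly's algorithm, interface, or parallelization.

        # Toy sketch of a polynomial-expansion matrix function for a sparse
        # symmetric matrix (this is not NTPoly): f(A) is approximated by a
        # truncated Chebyshev series evaluated via the three-term recurrence.
        import numpy as np
        import scipy.sparse as sp
        from numpy.polynomial import chebyshev as C

        def chebyshev_matrix_function(A, f, degree, lam_min, lam_max):
            """Approximate f(A) for symmetric sparse A with spectrum in [lam_min, lam_max]."""
            n = A.shape[0]
            I = sp.identity(n, format="csr")
            a, b = lam_min, lam_max
            As = (2.0 * A - (a + b) * I) * (1.0 / (b - a))   # spectrum mapped to [-1, 1]
            # Chebyshev coefficients of the rescaled scalar function on [-1, 1].
            coeffs = C.chebinterpolate(lambda t: f(0.5 * (b - a) * t + 0.5 * (a + b)), degree)
            T_prev, T_curr = I, As
            F = coeffs[0] * I + coeffs[1] * As
            for c in coeffs[2:]:
                T_next = 2.0 * (As @ T_curr) - T_prev
                F = F + c * T_next
                T_prev, T_curr = T_curr, T_next
            return F

        # Example: approximate exp(A) for a sparse tridiagonal matrix.
        n = 200
        A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
        expA = chebyshev_matrix_function(A, np.exp, degree=20, lam_min=-4.0, lam_max=0.0)
        print(expA.diagonal()[:3])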

  14. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right…

  15. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.

  16. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation

    DOE PAGES

    Lei, Huan; Yang, Xiu; Zheng, Bin; ...

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
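
    A minimal sketch of a sparse polynomial chaos surrogate is shown below, with a probabilists' Hermite basis in two Gaussian inputs and an L1-regularized regression (scikit-learn's Lasso) standing in for the compressive sensing solver; the model, sample size, and regularization level are illustrative assumptions, not the paper's setup.

        # Minimal sketch of a sparse polynomial chaos surrogate: probabilists'
        # Hermite basis in two Gaussian inputs, with Lasso regression standing
        # in for the compressive-sensing solver used in the paper.
        import numpy as np
        from numpy.polynomial import hermite_e as He
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)

        def model(xi):                     # hypothetical "expensive" model
            return np.exp(0.3 * xi[:, 0]) + 0.5 * xi[:, 0] * xi[:, 1]

        def pc_basis(xi, order):
            """Total-degree probabilists' Hermite basis evaluated at the samples xi."""
            cols = []
            for i in range(order + 1):
                for j in range(order + 1 - i):
                    hi = He.hermeval(xi[:, 0], [0] * i + [1])   # He_i(xi_1)
                    hj = He.hermeval(xi[:, 1], [0] * j + [1])   # He_j(xi_2)
                    cols.append(hi * hj)
            return np.column_stack(cols)

        xi = rng.standard_normal((60, 2))          # training samples of the inputs
        Psi = pc_basis(xi, order=4)
        y = model(xi)

        lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
        lasso.fit(Psi, y)
        print("nonzero PC coefficients:", np.count_nonzero(lasso.coef_))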

  17. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the current available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of the MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied for the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by achieving its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the results of GPC approximations compared with the MC simulations. Appropriate choices of the 4-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, shares an error about 1.1% with that of the 4-order GPC method. It takes a probability around 54.3% for the 4-order GPC approximation to attain the mean test value of the residual stress. The corresponding yield occupies over 90 percent around the mean within the twofold standard deviations.

  18. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the current available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of the MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly-developed numerical method, i.e., generalized polynomial chaos (GPC), is applied for the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by achieving its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the results of GPC approximations compared with the MC simulations. Appropriate choices of the 4-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, shares an error about 1.1% with that of the 4-order GPC method. It takes a probability around 54.3% for the 4-order GPC approximation to attain the mean test value of the residual stress. The corresponding yield occupies over 90 percent around the mean within the twofold standard deviations. PMID:29117096

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choun, Yoon Seok, E-mail: ychoun@gmail.com

    The Heun function generalizes all well-known special functions such as Spheroidal Wave, Lame, Mathieu, and hypergeometric {sub 2}F{sub 1}, {sub 1}F{sub 1} and {sub 0}F{sub 1} functions. Heun functions are applicable to diverse areas such as theory of black holes, lattice systems in statistical mechanics, solution of the Schrödinger equation of quantum mechanics, and addition of three quantum spins. In this paper I will apply three term recurrence formula (Y.S. Choun, (arXiv:1303.0806), 2013) to the power series expansion in closed forms of Heun function (infinite series and polynomial) including all higher terms of A{sub n}’s. Section 3 contains my analysis on applying the power series expansions of Heun function to a recent paper (R.S. Maier, Math. Comp. 33 (2007) 811–843). Due to space restriction final equations for the 192 Heun functions are not included in the paper, but feel free to contact me for the final solutions. Section 4 contains two additional examples using the power series expansions of Heun function. This paper is 3rd out of 10 in series “Special functions and three term recurrence formula (3TRF)”. See Section 5 for all the papers in the series. The previous paper in series deals with three term recurrence formula (3TRF). The next paper in the series describes the integral forms of Heun function and its asymptotic behaviors analytically. -- Highlights: •Power series expansion for infinite series of Heun function using 3 term rec. form. •Power series for polynomial which makes B{sub n} term terminated of Heun function. •Applicable to areas such as the Teukolsky equation in Kerr–Newman–de Sitter geometries.

  20. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  1. Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications

    NASA Technical Reports Server (NTRS)

    Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.

    2010-01-01

    We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero. We describe the present status of this development effort.

  2. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.

  3. Alternatives to the stochastic "noise vector" approach

    NASA Astrophysics Data System (ADS)

    de Forcrand, Philippe; Jäger, Benjamin

    2018-03-01

    Several important observables, like the quark condensate and the Taylor coefficients of the expansion of the QCD pressure with respect to the chemical potential, are based on the trace of the inverse Dirac operator and of its powers. Such traces are traditionally estimated with "noise vectors" sandwiching the operator. We explore alternative approaches based on polynomial approximations of the inverse Dirac operator.
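
    The contrast between the two approaches can be illustrated on a small Hermitian test matrix: the snippet below estimates tr(A^{-1}) once with ±1 noise vectors (Hutchinson estimator) and once by replacing A^{-1} with a Chebyshev polynomial approximation of 1/x, so that only traces of matrix polynomials are needed. The matrix, spectrum, and expansion order are arbitrary assumptions, not a lattice Dirac operator.

        # Toy illustration (not lattice-QCD code): estimate tr(A^{-1}) for a small
        # Hermitian positive definite matrix, (i) with Hutchinson noise vectors and
        # (ii) via a Chebyshev polynomial approximation of 1/x on the spectrum.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        rng = np.random.default_rng(1)
        n = 100
        Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
        eigs = np.linspace(0.5, 4.0, n)
        A = Q @ np.diag(eigs) @ Q.T                 # spectrum in [0.5, 4.0]

        exact = np.sum(1.0 / eigs)

        # (i) Noise-vector (Hutchinson) estimator with +/-1 vectors.
        est_noise = np.mean([v @ np.linalg.solve(A, v)
                             for v in rng.choice([-1.0, 1.0], size=(50, n))])

        # (ii) Polynomial alternative: tr(p(A)) with p ~ 1/x on [0.5, 4.0],
        #      needing only traces of the Chebyshev matrix polynomials.
        a, b = 0.5, 4.0
        coeffs = C.chebinterpolate(lambda t: 1.0 / (0.5 * (b - a) * t + 0.5 * (a + b)), 20)
        As = (2.0 * A - (a + b) * np.eye(n)) / (b - a)
        T_prev, T_curr = np.eye(n), As
        est_poly = coeffs[0] * n + coeffs[1] * np.trace(As)
        for c in coeffs[2:]:
            T_next = 2.0 * As @ T_curr - T_prev
            est_poly += c * np.trace(T_next)
            T_prev, T_curr = T_curr, T_next

        print(exact, est_noise, est_poly)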

  4. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  5. A polynomial chaos expansion based molecular dynamics study for probabilistic strength analysis of nano-twinned copper

    NASA Astrophysics Data System (ADS)

    Mahata, Avik; Mukhopadhyay, Tanmoy; Adhikari, Sondipon

    2016-03-01

    Nano-twinned structures are mechanically stronger, more ductile and more stable than their non-twinned forms. We have investigated the effect of varying twin spacing and twin boundary width (TBW) on the yield strength of nano-twinned copper in a probabilistic framework. An efficient surrogate modelling approach based on polynomial chaos expansion has been proposed for the analysis. Effectively utilising 15 sets of expensive molecular dynamics simulations, thousands of outputs have been obtained corresponding to different sets of twin spacing and twin width using virtual experiments based on the surrogates. One of the major outcomes of this work is that there exists an optimal combination of twin boundary spacing and twin width up to which the strength can be increased; beyond that critical point the nanowires weaken. This study also reveals that the yield strength of nano-twinned copper is more sensitive to TBW than to twin spacing. Such robust inferences have been possible to draw only because of the surrogate modelling approach, which makes it feasible to obtain results corresponding to 40 000 combinations of different twin boundary spacing and twin width in a computationally efficient framework.

  6. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  7. High-order regularization in lattice-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.

    2017-04-01

    A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.

  8. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos, Douglas V. Nance, Air Force Research Laboratory, 20-04-2015 – 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic…

  9. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    …distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). Methods based on polynomial chaos theory and on the maximum likelihood approach are developed to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended…

  10. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
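
    The polynomial mapping calibration can be sketched as a plain least-squares fit from object-space coordinates to sensor coordinates over known dot-card correspondences; the snippet below uses a synthetic mapping and second-order terms purely for illustration and is not the plenoptic processing pipeline of the paper.

        # Minimal sketch of a polynomial mapping calibration (not the plenoptic
        # pipeline itself): fit a polynomial that maps object-space points
        # (X, Y, Z) to sensor coordinates (u, v) from known dot-card points.
        import numpy as np

        def poly3d_terms(X, Y, Z, order=2):
            """Monomial terms X^i Y^j Z^k with i + j + k <= order."""
            cols = [X**i * Y**j * Z**k
                    for i in range(order + 1)
                    for j in range(order + 1 - i)
                    for k in range(order + 1 - i - j)]
            return np.column_stack(cols)

        rng = np.random.default_rng(2)
        obj = rng.uniform(-1.0, 1.0, size=(500, 3))          # hypothetical dot positions

        # Hypothetical "true" camera mapping with a mild depth-dependent distortion.
        u = 800 + 400 * obj[:, 0] + 15 * obj[:, 0] * obj[:, 2] + rng.normal(0, 0.2, 500)
        v = 600 + 400 * obj[:, 1] + 15 * obj[:, 1] * obj[:, 2] + rng.normal(0, 0.2, 500)

        T = poly3d_terms(obj[:, 0], obj[:, 1], obj[:, 2], order=2)
        coef_u, *_ = np.linalg.lstsq(T, u, rcond=None)
        coef_v, *_ = np.linalg.lstsq(T, v, rcond=None)

        resid = np.hypot(T @ coef_u - u, T @ coef_v - v)
        print("rms residual [pixels]:", np.sqrt(np.mean(resid**2)))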

  11. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.

  12. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

    Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned and the convergence power of this method is greater compared to the least-squares approximation and therefore the approach by orthonormal functions provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type: Z = 1 + x + y + x^2 + xy + y^2 + … + y^n, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample sets of data from India concerned with gold accumulation from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets. In both the situations, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces is included. The program has provision for logarithmic transformation of the Z variable. If log-transformation is performed the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration of gold assay data related to the Champion lode system of Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fit, could be used for further prospecting the area.
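
    The orthonormalization step can be sketched with a QR factorization of the monomial design matrix, which is a numerically stable equivalent of the Gram-Schmidt procedure used in the paper; the data below are synthetic and the snippet is not the original FORTRAN-IV program.

        # Sketch of an orthonormal-polynomial trend surface: the QR factorization
        # plays the role of the Gram-Schmidt step, so basis functions are
        # orthonormal over the data locations and trend coefficients are simple
        # projections (synthetic data; not the original program).
        import numpy as np

        rng = np.random.default_rng(3)
        x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
        z = 5.0 + 0.8 * x - 0.3 * y + 0.05 * x * y + rng.normal(0, 0.5, 200)

        # Second-order monomial basis 1, x, y, x^2, xy, y^2 evaluated at the data.
        M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

        Q, R = np.linalg.qr(M)          # columns of Q are orthonormal over the samples
        c = Q.T @ z                     # trend coefficients: plain inner products
        trend = Q @ c

        print("residual variance:", np.var(z - trend))
        print(np.linalg.solve(R, c))    # coefficients back in the monomial basis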

  13. Analytical Solutions for the Resonance Response of Goupillaud-type Elastic Media Using Z-transform Methods

    DTIC Science & Technology

    2012-02-01

    …using z-transform methods. The determinant of the resulting global system matrix in the z-space, |Am|, is a palindromic polynomial with real coefficients. The zeros of the palindromic polynomial are distinct… Goupillaud-type multilayered media. In addition, the present treatment uses a global matrix method that is attributed to Knopoff [16], rather than the…

  14. Ion velocity distribution functions in argon and helium discharges: detailed comparison of numerical simulation results and experimental data

    NASA Astrophysics Data System (ADS)

    Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.

    2017-02-01

    Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with the experiment data of the ion mobilities and the ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
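
    Estimating Legendre expansion coefficients of an angular distribution from sampled direction cosines, as done for the measured and simulated IVDFs, can be sketched as follows; the sample distribution is a hypothetical stand-in, not the simulation output.

        # Illustrative sketch: estimate the first few Legendre expansion
        # coefficients of an angular distribution from sampled direction cosines
        # (e.g. ion velocity directions from a Monte Carlo run). For a density
        # f(mu) on [-1, 1], f(mu) = sum_l (2l + 1)/2 * c_l * P_l(mu) with
        # c_l = E[P_l(mu)], so each c_l is just a sample mean.
        import numpy as np
        from numpy.polynomial import legendre as L

        rng = np.random.default_rng(4)
        # Hypothetical forward-peaked sample of direction cosines.
        mu = np.clip(1.0 - 0.3 * rng.exponential(size=100000), -1.0, 1.0)

        coeffs = []
        for l in range(7):                       # first seven coefficients, as in the paper
            Pl = L.Legendre.basis(l)             # the Legendre polynomial P_l
            coeffs.append(np.mean(Pl(mu)))
        print(np.round(coeffs, 3))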

  15. Recursive computation of mutual potential between two polyhedra

    NASA Astrophysics Data System (ADS)

    Hirabayashi, Masatoshi; Scheeres, Daniel J.

    2013-11-01

    Recursive computation of mutual potential, force, and torque between two polyhedra is studied. Based on formulations by Werner and Scheeres (Celest Mech Dyn Astron 91:337-349, 2005) and Fahnestock and Scheeres (Celest Mech Dyn Astron 96:317-339, 2006) who applied the Legendre polynomial expansion to gravity interactions and expressed each order term by a shape-dependent part and a shape-independent part, this paper generalizes the computation of each order term, giving recursive relations of the shape-dependent part. To consider the potential, force, and torque, we introduce three tensors. This method is applicable to any multi-body systems. Finally, we implement this recursive computation to simulate the dynamics of a two rigid-body system that consists of two equal-sized parallelepipeds.

  16. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve efficiency of road survey and save manpower and material resources, this paper intends to apply Google Earth to the feasibility study stage of road survey and design. Limited by the problem that Google Earth elevation data lacks precision, this paper is focused on finding several different fitting or difference methods to improve the data precision, in order to make every effort to meet the accuracy requirements of road survey and design specifications. Method: On the basis of elevation difference of limited public points, any elevation difference of the other points can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting elevation difference from the Google Earth data. Quadratic polynomial surface fitting method, cubic polynomial surface fitting method, V4 interpolation method in MATLAB and neural network method are used in this paper to process elevation data of Google Earth. And internal conformity, external conformity and cross correlation coefficient are used as evaluation indexes to evaluate the data processing effect. Results: There is no fitting difference at the fitting point while using V4 interpolation method. Its external conformity is the largest and the effect of accuracy improvement is the worst, so V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method both are better than those of the quadratic polynomial surface fitting method. The neural network method has a similar fitting effect with the cubic polynomial surface fitting method, but its fitting effect is better in the case of a higher elevation difference. Because the neural network method is an unmanageable fitting model, the cubic polynomial surface fitting method should be mainly used and the neural network method can be used as the auxiliary method in the case of higher elevation difference. Conclusions: Cubic polynomial surface fitting method can obviously improve data precision of Google Earth. The error of data in hilly terrain areas meets the requirement of specifications after precision improvement and it can be used in feasibility study stage of road survey and design.
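
    The cubic-surface correction can be sketched as follows (synthetic control points and a hypothetical bias surface, not the paper's data): the elevation differences at public points are fitted by a bivariate cubic polynomial, and the fitted difference is then subtracted from the Google Earth elevation at a query point.

        # Sketch of the cubic polynomial surface correction (synthetic data):
        # fit the elevation differences at control points, then subtract the
        # fitted difference from the Google Earth elevation elsewhere.
        import numpy as np

        def cubic_terms(x, y):
            """Bivariate monomials up to total degree 3."""
            return np.column_stack([x**i * y**j
                                    for i in range(4) for j in range(4 - i)])

        rng = np.random.default_rng(5)
        xc, yc = rng.uniform(0, 1, 40), rng.uniform(0, 1, 40)        # control points
        bias = 2.0 + 1.5 * xc - 0.8 * yc**2 + 0.4 * xc * yc          # hypothetical GE bias
        diff_obs = bias + rng.normal(0, 0.1, 40)                     # observed differences

        coef, *_ = np.linalg.lstsq(cubic_terms(xc, yc), diff_obs, rcond=None)

        # Correct a Google Earth elevation at a new point.
        xq, yq = 0.37, 0.62
        ge_elevation = 153.0                                          # hypothetical value
        corrected = ge_elevation - (cubic_terms(np.array([xq]), np.array([yq])) @ coef)[0]
        print(corrected)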

  17. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, the law of lumped Markov chains depends linearly on the minimum degree of polynomials over field GF(q). The method allows constructing the realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.

  18. Moving-Boundary Problems Associated with Lyopreservation

    NASA Astrophysics Data System (ADS)

    Gruber, Christopher Andrew

    The work presented in this Dissertation is motivated by research into the preservation of biological specimens by way of vitrification, a technique known as lyopreservation. The operative principle behind lyopreservation is that a glassy material forms as a solution of sugar and water is desiccated. The microstructure of this glass impedes transport within the material, thereby slowing metabolism and effectively halting the aging processes in a biospecimen. This Dissertation is divided into two segments. The first concerns the nature of diffusive transport within a glassy state. Experimental studies suggest that diffusion within a glass is anomalously slow. Scaled Brownian motion (SBM) is proposed as a mathematical model which captures the qualitative features of anomalously slow diffusion while minimizing computational expense. This model is applied to several moving-boundary problems and the results are compared to a more well-established model, fractional anomalous diffusion (FAD). The virtues of SBM are based on the model's relative mathematical simplicity: the governing equation under FAD dynamics involves a fractional derivative operator, which precludes the use of analytical methods in almost all circumstances and also entails great computational expense. In some geometries, SBM allows similarity solutions, though computational methods are generally required. The use of SBM as an approximation to FAD when a system is "nearly classical'' is also explored. The second portion of this Dissertation concerns spin-drying, which is an experimental approach to biopreservation in a laboratory setting. A biospecimen is adhered to a glass wafer and this substrate is covered with sugar solution and rapidly spun on a turntable while water is evaporated from the film surface. The mathematical model for the spin-drying process includes diffusion, viscous fluid flow, and evaporation, among other contributions to the dynamics. Lubrication theory is applied to the model and an expansion in orthogonal polynomials is applied. The resulting system of equations is solved computationally. The influence of various experimental parameters upon the system dynamics is investigated, particularly the role of the spin rate. A convergence study of the solution verifies that the polynomial expansion method yields accurate results.

  19. The Use of Generalized Laguerre Polynomials in Spectral Methods for Solving Fractional Delay Differential Equations.

    PubMed

    Khader, M M

    2013-10-01

    In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The proposed method is based on the derived approximate formula of the Laguerre polynomials. The properties of Laguerre polynomials are utilized to reduce FDDEs to a linear or nonlinear system of algebraic equations. Special attention is given to study the error and the convergence analysis of the proposed method. Several numerical examples are provided to confirm that the proposed method is in excellent agreement with the exact solution.
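
    A minimal sketch of the kind of Laguerre-series manipulation such a spectral method rests on, using numpy's Laguerre module to fit a smooth function on a grid of the half-line and differentiate the series term by term; it does not reproduce the paper's approximate formula for the Caputo fractional derivative, and the test function is arbitrary.

        import numpy as np
        from numpy.polynomial import laguerre as Lag

        # Represent a smooth function by a truncated Laguerre series (alpha = 0, the
        # simplest generalized Laguerre case) via a least-squares fit, then
        # differentiate the series coefficient-wise.
        x = np.linspace(0.0, 10.0, 200)
        f = np.exp(-x) * np.sin(x)                  # arbitrary test function

        coef = Lag.lagfit(x, f, deg=10)             # Laguerre-series coefficients
        f_fit = Lag.lagval(x, coef)                 # evaluate the truncated series
        df_fit = Lag.lagval(x, Lag.lagder(coef))    # derivative of the series

        print("max fit error:", np.max(np.abs(f_fit - f)))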

  20. HOMFLY for twist knots and exclusive Racah matrices in representation [333]

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-03-01

    The next step is reported in the program of Racah matrices extraction from the differential expansion of HOMFLY polynomials for twist knots: from the double-column rectangular representations R = [rr] to a triple-column and triple-hook R = [333]. The main new phenomenon is the deviation of the particular coefficient f_[332][21] from the corresponding skew dimension, which opens a way to further generalizations.

  1. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  2. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  3. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
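
    A small sketch of the recursive idea behind multivariate Horner evaluation: treat the polynomial as a polynomial in one chosen variable whose coefficients are polynomials in the remaining variables, and apply Horner's rule at each level. The dictionary representation and fixed variable order are illustrative assumptions; the paper's implementation and its Monte Carlo tree search over variable orderings are not reproduced here.

        from collections import defaultdict

        # A multivariate polynomial as {exponent_tuple: coefficient},
        # e.g. 3*x^2*y + 2*y + 5  ->  {(2, 1): 3, (0, 1): 2, (0, 0): 5}.
        def horner_eval(poly, point):
            """Evaluate recursively: a polynomial in the first variable whose
            coefficients are polynomials in the remaining variables."""
            if not poly:
                return 0.0
            if len(point) == 0:
                return poly.get((), 0.0)
            groups = defaultdict(dict)          # group terms by first-variable exponent
            for exps, c in poly.items():
                groups[exps[0]][exps[1:]] = c
            x, result = point[0], 0.0
            for e in range(max(groups), -1, -1):    # Horner in the first variable
                result = result * x + horner_eval(groups.get(e, {}), point[1:])
            return result

        p = {(2, 1): 3.0, (0, 1): 2.0, (0, 0): 5.0}     # 3*x^2*y + 2*y + 5
        print(horner_eval(p, (2.0, 4.0)))               # 3*4*4 + 2*4 + 5 = 61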

  4. Assessing the suitability of fractional polynomial methods in health services research: a perspective on the categorization epidemic.

    PubMed

    Williams, Jennifer Stewart

    2011-07-01

    To show how fractional polynomial methods can usefully replace the practice of arbitrarily categorizing data in epidemiology and health services research. A health service setting is used to illustrate a structured and transparent way of representing non-linear data without arbitrary grouping. When age is a regressor, its effects on an outcome will be interpreted differently depending upon the placing of cutpoints or the use of a polynomial transformation. Although it is common practice, categorization comes at a cost: information is lost, and accuracy and statistical power are reduced, leading to spurious statistical interpretation of the data. The fractional polynomial method is widely supported by statistical software programs, and deserves greater attention and use.

  5. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
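
    For flavor, here is a widely used two-sample quadratic correction (often called PolyBLEP) subtracted from a naive sawtooth around each wrap; it is simpler than the integrated third-order B-spline correction the study identifies as superior, but it illustrates the idea of adding a polynomial correction function near each discontinuity. The frequency and sample rate below are arbitrary.

        import numpy as np

        def poly_blep(t, dt):
            """Two-sample quadratic residual around the discontinuity at phase 0/1."""
            if t < dt:                       # just after the wrap
                t /= dt
                return 2.0 * t - t * t - 1.0
            if t > 1.0 - dt:                 # just before the wrap
                t = (t - 1.0) / dt
                return t * t + 2.0 * t + 1.0
            return 0.0

        def saw(freq, sr, n):
            """Naive sawtooth minus the polynomial correction at each discontinuity."""
            dt = freq / sr
            phase, out = 0.0, np.empty(n)
            for i in range(n):
                out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
                phase += dt
                if phase >= 1.0:
                    phase -= 1.0
            return out

        y = saw(440.0, 44100.0, 1024)   # noticeably less aliasing than the naive ramp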

  6. On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients

    ERIC Educational Resources Information Center

    Si, Do Tan

    1977-01-01

    Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D + d/dz are known to be Hermitian conjugates with respect to the Bargman and Louck-Galbraith scalar products. (MLH)

  7. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  8. Equivalent-circuit models for electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Phu Le, Cuong; Halvorsen, Einar

    2017-08-01

    This paper presents a complete analysis to build a tool for modelling electret-based vibration energy harvesters. The calculational approach includes all possible effects of fringing fields that may have significant impact on output power. The transducer configuration consists of two sets of metal strip electrodes on a top substrate that faces electret strips deposited on a bottom movable substrate functioning as a proof mass. Charge distribution on each metal strip is expressed by series expansion using Chebyshev polynomials multiplied by a reciprocal square-root form. The Galerkin method is then applied to extract all charge induction coefficients. The approach is validated by finite element calculations. From the analytic tool, a variety of connection schemes for power extraction in slot-effect and cross-wafer configurations can be lumped to a standard equivalent circuit with inclusion of parasitic capacitance. Fast calculation of the coefficients is also obtained by a proposed closed-form solution based on leading terms of the series expansions. The achieved analytical result is an important step for further optimisation of the transducer geometry and maximising harvester performance.

  9. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    NASA Astrophysics Data System (ADS)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs the stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom to reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. It is noted that the computational time is significantly reduced while the accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
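
    The Karhunen-Loève step can be sketched for a one-dimensional Gaussian field such as the elastic modulus along the riser axis by discretizing an assumed exponential covariance and truncating its eigendecomposition; the covariance model, correlation length and statistics below are illustrative assumptions, not values from the paper.

        import numpy as np

        # Discrete Karhunen-Loeve expansion of a 1-D Gaussian random field
        # (e.g. elastic modulus along the riser), exponential covariance assumed.
        n, length, corr_len = 200, 100.0, 20.0
        mean_E, std_E = 210e9, 10e9
        z = np.linspace(0.0, length, n)

        C = std_E**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
        eigval, eigvec = np.linalg.eigh(C)               # ascending eigenvalues
        idx = np.argsort(eigval)[::-1][:10]              # keep the 10 dominant modes
        lam, phi = eigval[idx], eigvec[:, idx]

        rng = np.random.default_rng(1)
        xi = rng.standard_normal(10)                     # independent standard normals
        E_sample = mean_E + phi @ (np.sqrt(lam) * xi)    # one realization of the field
        print(E_sample[:5])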

  10. Generalised Transfer Functions of Neural Networks

    NASA Astrophysics Data System (ADS)

    Fung, C. F.; Billings, S. A.; Zhang, H.

    1997-11-01

    When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.

  11. Retrieving the optical parameters of biological tissues using diffuse reflectance spectroscopy and Fourier series expansions. I. theory and application.

    PubMed

    Muñoz Morales, Aarón A; Vázquez Y Montiel, Sergio

    2012-10-01

    The determination of optical parameters of biological tissues is essential for the application of optical techniques in the diagnosis and treatment of diseases. Diffuse Reflection Spectroscopy is a widely used technique to analyze the optical characteristics of biological tissues. In this paper we show that by using diffuse reflectance spectra and a new mathematical model we can retrieve the optical parameters by applying an adjustment of the data with nonlinear least squares. In our model we represent the spectra using a Fourier series expansion finding mathematical relations between the polynomial coefficients and the optical parameters. In this first paper we use spectra generated by the Monte Carlo Multilayered Technique to simulate the propagation of photons in turbid media. Using these spectra we determine the behavior of Fourier series coefficients when varying the optical parameters of the medium under study. With this procedure we find mathematical relations between Fourier series coefficients and optical parameters. Finally, the results show that our method can retrieve the optical parameters of biological tissues with accuracy that is adequate for medical applications.
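
    As a rough sketch of the expansion step, a reflectance spectrum can be represented by a truncated Fourier cosine series fitted by linear least squares; the synthetic spectrum and the number of harmonics are placeholders, and the paper's mapping from the coefficients to optical parameters is not reproduced.

        import numpy as np

        def cosine_design(wl, n_terms):
            """Columns cos(k*pi*s), k = 0..n_terms-1, with s the wavelength scaled to [0, 1]."""
            s = (wl - wl.min()) / (wl.max() - wl.min())
            return np.column_stack([np.cos(k * np.pi * s) for k in range(n_terms)])

        wl = np.linspace(500.0, 900.0, 400)                      # nm
        R = 0.3 + 0.1 * np.exp(-((wl - 650.0) / 60.0) ** 2)      # synthetic reflectance
        R += np.random.default_rng(2).normal(0, 0.002, wl.size)

        A = cosine_design(wl, n_terms=12)
        coef, *_ = np.linalg.lstsq(A, R, rcond=None)             # Fourier cosine coefficients
        R_fit = A @ coef
        print("rms residual:", np.sqrt(np.mean((R_fit - R) ** 2)))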

  12. Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform.

    PubMed

    Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan

    2016-01-01

    Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for the optimal polynomial parameters with which the PCFT achieves the most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method has better performance in component extraction from noisy multicomponent signals and provides more time-frequency details about the analyzed signal than conventional methods.

  13. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  14. Estimation of Phase in Fringe Projection Technique Using High-order Instantaneous Moments Based Method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod

    2010-04-01

    For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.

  15. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems which is represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee their Lyapunov function to be a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  16. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Due to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function. Therefore, we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is surely effective in finite-sample situations.

  17. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.

  18. Weierstrass method for quaternionic polynomial root-finding

    NASA Astrophysics Data System (ADS)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
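
    For reference, the classical Weierstrass (Durand-Kerner) iteration for a complex monic polynomial, which the paper generalizes to quaternionic arithmetic, updates all root approximations simultaneously; the sketch below is the complex version only, with standard starting guesses.

        import numpy as np

        def weierstrass_roots(coeffs, tol=1e-12, max_iter=200):
            """Simultaneously approximate all roots of a monic polynomial.
            coeffs: [1, a_{n-1}, ..., a_0] in descending powers."""
            n = len(coeffs) - 1
            z = (0.4 + 0.9j) ** np.arange(n)      # distinct starting points on a spiral
            for _ in range(max_iter):
                p = np.polyval(coeffs, z)
                diff = z[:, None] - z[None, :]    # z_i - z_j for all pairs
                np.fill_diagonal(diff, 1.0)
                w = p / diff.prod(axis=1)         # Weierstrass correction terms
                z = z - w
                if np.max(np.abs(w)) < tol:
                    break
            return z

        # x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
        print(np.sort_complex(weierstrass_roots([1.0, -6.0, 11.0, -6.0])))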

  19. Analysis of actuator delay and its effect on uncertainty quantification for real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai

    2017-10-01

    Uncertainties in structure properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied for RTHS without delay to determine the order of PCE, the number of sample points as well as the method for coefficients calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates the influence of the single random variable decreases while the coupling effect increases with the increase of actuator delay.

  20. Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Johnson, Duane

    1996-01-01

    Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. This technique also works well for solving linear eigenvalue problems. Specific detail is given to the properties and algebra of Chebyshev polynomials; the use of Chebyshev polynomials in spectral methods; and the recurrence relationships that are developed. These formulas and equations are then applied to several examples which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
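
    A minimal illustration of the recurrence relation central to such methods, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), checked against numpy's Chebyshev module; this is not the report's FORTRAN program, just a sanity check of the recurrence.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def chebyshev_T(n, x):
            """Evaluate T_n(x) via the recurrence T_{k+1} = 2x*T_k - T_{k-1}."""
            t_prev, t = np.ones_like(x), x
            if n == 0:
                return t_prev
            for _ in range(n - 1):
                t_prev, t = t, 2.0 * x * t - t_prev
            return t

        x = np.linspace(-1.0, 1.0, 5)
        for n in range(6):
            ref = C.chebval(x, [0] * n + [1])     # coefficient vector selecting T_n
            assert np.allclose(chebyshev_T(n, x), ref)
        print("recurrence matches numpy.polynomial.chebyshev")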

  1. Transfer matrix computation of critical polynomials for two-dimensional Potts models

    DOE PAGES

    Jacobsen, Jesper Lykke; Scullard, Christian R.

    2013-02-04

    We showed, in our previous work, that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P_B(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e^K − 1 of P_B(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P_B(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P_B(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v_c obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v_c(4, 8^2) = 3.742489(4), v_c(kagome) = 1.8764597(2), and v_c(3, 12^2) = 5.03307849(4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.

  2. Causal properties of nonlinear gravitational waves in modified gravity

    NASA Astrophysics Data System (ADS)

    Suvorov, Arthur George; Melatos, Andrew

    2017-09-01

    Some exact, nonlinear, vacuum gravitational wave solutions are derived for certain polynomial f(R) gravities. We show that the boundaries of the gravitational domain of dependence, associated with events in polynomial f(R) gravity, are not null as they are in general relativity. The implication is that electromagnetic and gravitational causality separate into distinct notions in modified gravity, which may have observable astrophysical consequences. The linear theory predicts that tachyonic instabilities occur when the quadratic coefficient a_2 of the Taylor expansion of f(R) is negative, while the exact, nonlinear, cylindrical wave solutions presented here can be superluminal for all values of a_2. Anisotropic solutions are found, whose wave fronts trace out time- or spacelike hypersurfaces with complicated geometric properties. We show that the solutions exist in f(R) theories that are consistent with Solar System and pulsar timing experiments.

  3. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  4. On polynomial selection for the general number field sieve

    NASA Astrophysics Data System (ADS)

    Kleinjung, Thorsten

    2006-12-01

    The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.

  5. Precision measurement of the η → π^+ π^- π^0 Dalitz plot distribution with the KLOE detector

    NASA Astrophysics Data System (ADS)

    Anastasi, A.; Babusci, D.; Bencivenni, G.; Berlowski, M.; Bloise, C.; Bossi, F.; Branchini, P.; Budano, A.; Caldeira Balkeståhl, L.; Cao, B.; Ceradini, F.; Ciambrone, P.; Curciarello, F.; Czerwinski, E.; D'Agostini, G.; Danè, E.; De Leo, V.; De Lucia, E.; De Santis, A.; De Simone, P.; Di Cicco, A.; Di Domenico, A.; Di Salvo, R.; Domenici, D.; D'Uffizi, A.; Fantini, A.; Felici, G.; Fiore, S.; Gajos, A.; Gauzzi, P.; Giardina, G.; Giovannella, S.; Graziani, E.; Happacher, F.; Heijkenskjöld, L.; Ikegami Andersson, W.; Johansson, T.; Kaminska, D.; Krzemien, W.; Kupsc, A.; Loffredo, S.; Mandaglio, G.; Martini, M.; Mascolo, M.; Messi, R.; Miscetti, S.; Morello, G.; Moricciani, D.; Moskal, P.; Papenbrock, M.; Passeri, A.; Patera, V.; Perez del Rio, E.; Ranieri, A.; Santangelo, P.; Sarra, I.; Schioppa, M.; Silarski, M.; Sirghi, F.; Tortora, L.; Venanzoni, G.; Wislicki, W.; Wolke, M.

    2016-05-01

    Using 1.6 fb^-1 of e^+ e^- → φ → ηγ data collected with the KLOE detector at DAΦNE, the Dalitz plot distribution for the η → π^+ π^- π^0 decay is studied with the world's largest sample of ~4.7 · 10^6 events. The Dalitz plot density is parametrized as a polynomial expansion up to cubic terms in the normalized dimensionless variables X and Y. The experiment is sensitive to all charge conjugation conserving terms of the expansion, including a gX^2Y term. The statistical uncertainty of all parameters is improved by a factor of two with respect to earlier measurements.

  6. A polynomial-chaos-expansion-based building block approach for stochastic analysis of photonic circuits

    NASA Astrophysics Data System (ADS)

    Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea

    2018-02-01

    The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on their functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of these uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once and stored in a BB (foundry dependent) library and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to the classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.

  7. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
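
    The core trick of replacing a matrix inversion by a low-degree matrix polynomial can be sketched with a plain truncated Neumann series applied through matrix-vector products; unlike the PEACH estimators, the weights here are not MSE-optimized, and the covariance matrix is a synthetic stand-in.

        import numpy as np

        def poly_inverse_apply(C, y, degree):
            """Approximate C^{-1} y with a degree-`degree` matrix polynomial in C
            (truncated Neumann series), using only matrix-vector products."""
            alpha = np.linalg.norm(C, 2)      # for SPD C this keeps rho(I - C/alpha) < 1
            r = y / alpha
            x = r.copy()
            for _ in range(degree):
                r = r - (C @ r) / alpha       # r <- (I - C/alpha) r
                x = x + r
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((50, 50))
        C = A @ A.T + 50 * np.eye(50)          # well-conditioned SPD "covariance"
        y = rng.standard_normal(50)

        for deg in (2, 4, 8, 16):              # error shrinks as the degree grows
            err = np.linalg.norm(poly_inverse_apply(C, y, deg) - np.linalg.solve(C, y))
            print(deg, err)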

  9. Two dimensional J-matrix approach to quantum scattering

    NASA Astrophysics Data System (ADS)

    Olumegbon, Ismail Adewale

    2013-01-01

    We present an extension of the J-matrix method of scattering to two dimensions in cylindrical coordinates. In the J-matrix approach we select a zeroth order Hamiltonian, H0, which is exactly solvable in the sense that we select a square integrable basis set that enables us to have an infinite tridiagonal representation for H0. Expanding the wavefunction in this basis makes the wave equation equivalent to a three-term recursion relation for the expansion coefficients. Consequently, finding solutions of the recursion relation is equivalent to solving the original H0 problem (i.e., determining the expansion coefficients of the system's wavefunction). The part of the original potential interaction which cannot be brought to an exact tridiagonal form is cut in an NxN basis space and its matrix elements are computed numerically using a Gauss quadrature approach. Hence, this approach embodies powerful tools in the analysis of solutions of the wave equation by exploiting the intimate connection and interplay between tridiagonal matrices and the theory of orthogonal polynomials. In such analysis, one is at liberty to employ a wide range of well established methods and numerical techniques associated with these settings, such as quadrature approximation and continued fractions. To demonstrate the utility, usefulness, and accuracy of the extended method we use it to obtain the bound states for an illustrative short range potential problem.

  10. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
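
    A one-dimensional analogue of the Legendre-expansion step can be sketched by fitting a synthetic angular distribution, expressed in cos θ, with numpy's Legendre module; the anisotropy coefficients and count scale are arbitrary assumptions, and the algorithm's fit of the full 2D projection is not reproduced.

        import numpy as np
        from numpy.polynomial import legendre as Leg

        # Synthetic angular distribution: I(theta) ~ P0 + 0.8*P2 + 0.1*P4 in cos(theta).
        rng = np.random.default_rng(4)
        theta = rng.uniform(0.0, np.pi, 5000)
        c = np.cos(theta)
        true = Leg.legval(c, [1.0, 0.0, 0.8, 0.0, 0.1])   # coefficients of P0..P4
        counts = rng.poisson(200.0 * true)                # Poisson-distributed counts

        # Recover the Legendre coefficients by a least-squares fit in cos(theta).
        coef = Leg.legfit(c, counts / 200.0, deg=4)
        print(np.round(coef, 3))                          # close to [1, 0, 0.8, 0, 0.1]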

  11. Probe measurements of the electron velocity distribution function in beams: Low-voltage beam discharge in helium

    NASA Astrophysics Data System (ADS)

    Sukhomlinov, V.; Mustafaev, A.; Timofeev, N.

    2018-04-01

    Previously developed methods based on the single-sided probe technique are altered and applied to measure the anisotropic angular spread and narrow energy distribution functions of charged particle (electron and ion) beams. The conventional method is not suitable for some configurations, such as low-voltage beam discharges, electron beams accelerated in near-wall and near-electrode layers, and vacuum electron beam sources. To determine the range of applicability of the proposed method, simple algebraic relationships between the charged particle energies and their angular distribution are obtained. The method is verified for the case of the collisionless mode of a low-voltage He beam discharge, where the traditional method for finding the electron distribution function with the help of a Legendre polynomial expansion is not applicable. This leads to the development of a physical model of the formation of the electron distribution function in a collisionless low-voltage He beam discharge. The results of a numerical calculation based on Monte Carlo simulations are in good agreement with the experimental data obtained using the new method.

  12. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfit with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.

  13. An algorithmic approach to solving polynomial equations associated with quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Zinin, M. V.

    2009-12-01

    In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, the system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.

  14. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.

  15. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    NASA Astrophysics Data System (ADS)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated for implementation. This paper presents a simple, and easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively-appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz' portfolio equations.
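
    A plain moment-matching approximation in the Fenton-Wilkinson style (matching the mean and variance of the weighted sum to a single lognormal) is sketched below for independent summands; it omits the modified moments and the polynomial asymptotic-series correction of the paper, and the portfolio parameters are illustrative.

        import numpy as np

        def fenton_wilkinson(weights, mu, sigma):
            """Match mean/variance of sum_i w_i * LogNormal(mu_i, sigma_i) to one lognormal.
            Returns (mu_s, sigma_s) of the approximating lognormal (independent terms)."""
            w, mu, sigma = map(np.asarray, (weights, mu, sigma))
            m = w * np.exp(mu + 0.5 * sigma**2)                 # means of the terms
            v = (np.exp(sigma**2) - 1.0) * m**2                 # variances of the terms
            mean_s, var_s = m.sum(), v.sum()
            sigma_s2 = np.log(1.0 + var_s / mean_s**2)
            mu_s = np.log(mean_s) - 0.5 * sigma_s2
            return mu_s, np.sqrt(sigma_s2)

        # Portfolio-style example: 10 assets with different weights and volatilities.
        rng = np.random.default_rng(5)
        w = rng.dirichlet(np.ones(10))
        mu, sigma = rng.normal(0.0, 0.2, 10), rng.uniform(0.1, 0.4, 10)
        mu_s, sigma_s = fenton_wilkinson(w, mu, sigma)

        # Compare the matched mean against Monte Carlo.
        samples = (w * rng.lognormal(mu, sigma, size=(100000, 10))).sum(axis=1)
        print("FW mean:", np.exp(mu_s + 0.5 * sigma_s**2), " MC mean:", samples.mean())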

  16. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.

  17. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  18. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  19. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  20. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_{xx} z_{yy} − z_{xy}^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  1. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, for both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, it appears that although the accuracy of the linear polynomial that uses the correlation analysis outcomes is a little higher (achieving 86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.

  2. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.

  3. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
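
    As a small illustration of the truncation idea (the exact threshold used in the paper may differ), a hyperbolic cross keeps far fewer bivariate modes than the full tensor grid:

```python
# Compare a full tensor-product index set with a hyperbolic cross truncation
# that keeps modes (j, k) with (j + 1)(k + 1) <= N + 1 (illustrative threshold).
N = 16
full = [(j, k) for j in range(N + 1) for k in range(N + 1)]
hyperbolic = [(j, k) for (j, k) in full if (j + 1) * (k + 1) <= N + 1]
print(f"full tensor grid: {len(full)} modes; hyperbolic cross: {len(hyperbolic)} modes")
```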

  4. On computing closed forms for summations. [polynomials and rational functions

    NASA Technical Reports Server (NTRS)

    Moenck, R.

    1977-01-01

    The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational-function part and a transcendental part involving derivatives of the gamma function.

  5. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few of them have been applied to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on the background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method after background correction. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940 respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
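
    A minimal sketch of the spline-interpolation idea on a synthetic spectrum (the anchor wavelengths, line positions, and noise level are illustrative assumptions, not the paper's data): pick line-free points, interpolate a smooth background through them with a cubic spline, and subtract it.

```python
# Spline-interpolation background estimation on a synthetic LIBS-like spectrum.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
wavelength = np.linspace(300.0, 800.0, 2000)
background = 50.0 * np.exp(-(wavelength - 450.0) ** 2 / 2.0e4)          # smooth continuum
lines = sum(a * np.exp(-(wavelength - c) ** 2 / 0.5)
            for a, c in [(40.0, 400.0), (60.0, 520.0), (30.0, 640.0)])  # emission lines
spectrum = background + lines + rng.normal(0.0, 0.5, wavelength.size)

# Anchor points chosen by hand at line-free wavelengths (an assumption of this sketch).
anchors = np.array([310.0, 360.0, 440.0, 470.0, 560.0, 600.0, 700.0, 780.0])
idx = np.searchsorted(wavelength, anchors)
estimated_background = CubicSpline(wavelength[idx], spectrum[idx])(wavelength)
corrected = spectrum - estimated_background

rms_error = np.sqrt(np.mean((estimated_background - background) ** 2))
print(f"RMS error of the spline background estimate: {rms_error:.3f}")
```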

  6. An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
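
    The parallel tree construction is specific to the paper, but the underlying principle, locating eigenvalues from sign information of the characteristic polynomial on a shrinking interval, can be sketched serially with a Sturm-type count and bisection (the matrix below is a random example):

```python
# Serial sketch (not the O(log^2 N) parallel tree algorithm): count eigenvalues of a
# symmetric tridiagonal matrix below x via the pivots of the LDL^T factorization of
# T - xI, then bisect to isolate one eigenvalue.
import numpy as np

def count_below(d, e, x, eps=1e-300):
    """Number of eigenvalues of tridiag(d, e) strictly less than x."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if abs(q) < eps:
            q = eps
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue inside the bracket [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
n = 8
d, e = rng.normal(size=n), rng.normal(size=n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
bound = np.abs(d).max() + 2 * np.abs(e).max()          # Gershgorin bracket
print(kth_eigenvalue(d, e, 2, -bound, bound), np.sort(np.linalg.eigvalsh(T))[2])
```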

  7. Pulse transmission transmitter including a higher order time derivate filter

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-23

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  8. On method of solving third-order ordinary differential equations directly using Bernstein polynomials

    NASA Astrophysics Data System (ADS)

    Khataybeh, S. N.; Hashim, I.

    2018-04-01

    In this paper, we propose for the first time a method based on Bernstein polynomials for solving directly a class of third-order ordinary differential equations (ODEs). This method gives a numerical solution by converting the equation into a system of algebraic equations which is solved directly. Some numerical examples are given to show the applicability of the method.

  9. Stabilization of numerical interchange in spectral-element magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sovinec, C. R.

    In this study, auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C⁰ finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. Lastly, the projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code [C. R. Sovinec, et al., J. Comput. Phys. 195 (2004) 355-386], provided that the projections introduce numerical dissipation.

  10. Stabilization of numerical interchange in spectral-element magnetohydrodynamics

    DOE PAGES

    Sovinec, C. R.

    2016-05-10

    In this study, auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C⁰ finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. Lastly, the projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code [C. R. Sovinec, et al., J. Comput. Phys. 195 (2004) 355-386], provided that the projections introduce numerical dissipation.

  11. Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.

    PubMed

    Bayati, Basil S

    2016-05-01

    We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that, depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.
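
    A toy sketch of why averaging over the extrinsic distribution differs from evaluating at its mean (the response function and gamma parameters are illustrative assumptions): collocation at generalized Gauss-Laguerre nodes integrates a nonlinear function of a gamma-distributed parameter.

```python
# Collocation over a gamma-distributed extrinsic parameter: E[f(R0)] vs f(E[R0]).
import numpy as np
from scipy.special import roots_genlaguerre

shape, scale = 4.0, 0.5                        # gamma-distributed basic reproductive number
f = lambda r0: r0 / (1.0 + r0)                 # toy nonlinear model response

y, w = roots_genlaguerre(12, shape - 1.0)      # nodes/weights for weight y^(shape-1) e^(-y)
mean_f = np.sum(w * f(scale * y)) / np.sum(w)  # collocation estimate of E[f(R0)]
print("E[f(R0)] ~", mean_f, " vs f(E[R0]) =", f(shape * scale))
```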

  12. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significantly better computational efficiency than the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.

  13. Strong stabilization servo controller with optimization of performance criteria.

    PubMed

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2011-07-01

    Synthesis of a simple robust controller with a pole placement technique and an H∞ metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and assessment of the norm ‖·‖∞. The design procedure and the optimization are performed with a genetic algorithm, differential evolution (DE). The DE optimization method determines a suboptimal solution throughout the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both utilized approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
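
    A tiny sketch of the fitting-then-integrating step (synthetic slope data, not the paper's systems or its regression/interpolation variants): fit a polynomial to thermodynamic-integration data at non-equidistant λ values and integrate it analytically.

```python
# Polynomial fit of <dU/dlambda> samples, then analytic integration over [0, 1].
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.0, 0.05, 0.15, 0.35, 0.6, 0.85, 1.0])     # non-equidistant lambda values
dU = 10.0 * lam ** 3 - 4.0 * lam + 1.0 + 0.05 * rng.normal(size=lam.size)

coeffs = np.polyfit(lam, dU, deg=3)                         # polynomial regression of the slope data
antiderivative = np.polyint(coeffs)
dF = np.polyval(antiderivative, 1.0) - np.polyval(antiderivative, 0.0)
print("estimated free energy difference:", dF)
```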

  15. On Partial Fraction Decompositions by Repeated Polynomial Divisions

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2017-01-01

    We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…

  16. Chaotic Expansions of Elements of the Universal Enveloping Superalgebra Associated with a Z2-graded Quantum Stochastic Calculus

    NASA Astrophysics Data System (ADS)

    Eyre, T. M. W.

    Given a polynomial function f of classical stochastic integrator processes whose differentials satisfy a closed Ito multiplication table, the stochastic derivative of f can be expressed in an explicit chaotic-expansion form. We establish an analogue of this formula in the form of a chaotic decomposition for Z2-graded theories of quantum stochastic calculus based on the natural coalgebra structure of the universal enveloping superalgebra.

  17. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.

  18. CBR anisotropy from primordial gravitational waves in inflationary cosmologies

    NASA Astrophysics Data System (ADS)

    Allen, Bruce; Koranda, Scott

    1994-09-01

    We examine stochastic temperature fluctuations of the cosmic background radiation (CBR) arising via the Sachs-Wolfe effect from gravitational wave perturbations produced in the early Universe. These temperature fluctuations are described by an angular correlation function C(γ). A new (more concise and general) derivation of C(γ) is given, and evaluated for inflationary-universe cosmologies. This yields standard results for angles γ greater than a few degrees, but new results for smaller angles, because we do not make standard long-wavelength approximations to the gravitational wave mode functions. The function C(γ) may be expanded in a series of Legendre polynomials; we use numerical methods to compare the coefficients of the resulting expansion in our exact calculation with standard (approximate) results. We also report some progress towards finding a closed form expression for C(γ).
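
    A small numerical sketch of the Legendre expansion step (with a toy correlation function standing in for C(γ)): the coefficients follow from Gauss-Legendre quadrature over x = cos γ.

```python
# Legendre coefficients c_l = (2l+1)/2 * Integral_{-1}^{1} C(x) P_l(x) dx, x = cos(gamma).
import numpy as np
from numpy.polynomial import legendre as L

C = lambda x: np.exp(-2.0 * (1.0 - x))          # toy angular correlation function
x, w = L.leggauss(64)                           # Gauss-Legendre nodes and weights

coeffs = []
for l in range(6):
    Pl = L.legval(x, np.eye(6)[l])              # P_l evaluated at the quadrature nodes
    coeffs.append((2 * l + 1) / 2.0 * np.sum(w * C(x) * Pl))
print(np.array(coeffs))

# Check: the truncated Legendre series reproduces C at a test angle.
xt = np.cos(0.3)
print(C(xt), L.legval(xt, coeffs))
```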

  19. Viscous, resistive MHD stability computed by spectral techniques

    NASA Technical Reports Server (NTRS)

    Dahlburg, R. B.; Zang, T. A.; Montgomery, D.; Hussaini, M. Y.

    1983-01-01

    Expansions in Chebyshev polynomials are used to study the linear stability of one dimensional magnetohydrodynamic (MHD) quasi-equilibria, in the presence of finite resistivity and viscosity. The method is modeled on the one used by Orszag in accurate computation of solutions of the Orr-Sommerfeld equation. Two Reynolds like numbers involving Alfven speeds, length scales, kinematic viscosity, and magnetic diffusivity govern the stability boundaries, which are determined by the geometric mean of the two Reynolds like numbers. Marginal stability curves, growth rates versus Reynolds like numbers, and growth rates versus parallel wave numbers are exhibited. A numerical result which appears general is that instability was found to be associated with inflection points in the current profile, though no general analytical proof has emerged. It is possible that nonlinear subcritical three dimensional instabilities may exist, similar to those in Poiseuille and Couette flow.

  20. Stefan-Maxwell Relations and Heat Flux with Anisotropic Transport Coefficients for Ionized Gases in a Magnetic Field with Application to the Problem of Ambipolar Diffusion

    NASA Astrophysics Data System (ADS)

    Kolesnichenko, A. V.; Marov, M. Ya.

    2018-01-01

    The defining relations for the thermodynamic diffusion and heat fluxes in a multicomponent, partially ionized gas mixture in an external electromagnetic field have been obtained by the methods of the kinetic theory. Generalized Stefan-Maxwell relations and algebraic equations for anisotropic transport coefficients (the multicomponent diffusion, thermal diffusion, electric and thermoelectric conductivity coefficients as well as the thermal diffusion ratios) associated with diffusion-thermal processes have been derived. The defining second-order equations are derived by the Chapman-Enskog procedure using Sonine polynomial expansions. The modified Stefan-Maxwell relations are used for the description of ambipolar diffusion in the Earth's ionospheric plasma (in the F region) composed of electrons, ions of many species, and neutral particles in a strong electromagnetic field.

  1. Polynomial dual energy inverse functions for bone Calcium/Phosphorus ratio determination and experimental evaluation.

    PubMed

    Sotiropoulou, P; Fountos, G; Martini, N; Koukou, V; Michail, C; Kandarakis, I; Nikiforidis, G

    2016-12-01

    An X-ray dual-energy (XRDE) method was examined, using polynomial nonlinear approximation of inverse functions for the determination of the bone calcium-to-phosphorus (Ca/P) mass ratio. Inverse fitting functions with least-squares estimation were used to determine the calcium and phosphate thicknesses. The method was verified by measuring test bone phantoms with a dedicated dual-energy system and compared with previously published dual-energy data. The accuracy in the determination of the calcium and phosphate thicknesses improved with the polynomial nonlinear inverse function method introduced in this work (ranging from 1.4% to 6.2%), compared to the corresponding linear inverse function method (ranging from 1.4% to 19.5%). Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Minimizing Higgs potentials via numerical polynomial homotopy continuation

    NASA Astrophysics Data System (ADS)

    Maniatis, M.; Mehta, D.

    2012-08-01

    The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like nonlinearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
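
    A one-dimensional analogue (not the NPHC solver, and the potential is a made-up example): for a single-variable polynomial potential, every stationary point is a root of the derivative, so a polynomial root finder recovers them all.

```python
# All stationary points of a toy 1D polynomial potential via the companion-matrix
# root finder in numpy; multivariate potentials need homotopy continuation instead.
import numpy as np
from numpy.polynomial import Polynomial

V = Polynomial([0.0, 1.0, -3.0, 0.5, 1.0])        # V(x) = x - 3x^2 + 0.5x^3 + x^4
roots = V.deriv().roots()
stationary = roots[np.abs(roots.imag) < 1e-10].real

for x0 in np.sort(stationary):
    kind = "minimum" if V.deriv(2)(x0) > 0 else "maximum/inflection"
    print(f"stationary point x = {x0:+.4f}, V = {V(x0):+.4f} ({kind})")
```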

  3. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced to the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440

  4. Study of the Influence of the Orientation of a 50-Hz Magnetic Field on Fetal Exposure Using Polynomial Chaos Decomposition

    PubMed Central

    Liorni, Ilaria; Parazzini, Marta; Fiocchi, Serena; Ravazzani, Paolo

    2015-01-01

    Human exposure modelling is a complex topic, because in a realistic exposure scenario, several parameters (e.g., the source, the orientation of incident fields, the morphology of subjects) vary and influence the dose. Deterministic dosimetry, so far used to analyze human exposure to electromagnetic fields (EMF), is highly time consuming if the previously-mentioned variations are considered. Stochastic dosimetry is an alternative method to build analytical approximations of exposure at a lower computational cost. In this study, it was used to assess the influence of magnetic flux density (B) orientation on fetal exposure at 50 Hz by polynomial chaos (PC). A PC expansion of induced electric field (E) in each fetal tissue at different gestational ages (GA) was built as a function of B orientation. Maximum E in each fetal tissue and at each GA was estimated for different exposure configurations and compared with the limits of the International Commission of Non-Ionising Radiation Protection (ICNIRP) Guidelines 2010. PC theory resulted in an efficient tool to build accurate approximations of E in each fetal tissue. B orientation strongly influenced E, with a variability across tissues from 10% to 43% with respect to the mean value. However, varying B orientation, maximum E in each fetal tissue was below the limits of ICNIRP 2010 at all GAs. PMID:26024363

  5. Study of the influence of the orientation of a 50-Hz magnetic field on fetal exposure using polynomial chaos decomposition.

    PubMed

    Liorni, Ilaria; Parazzini, Marta; Fiocchi, Serena; Ravazzani, Paolo

    2015-05-27

    Human exposure modelling is a complex topic, because in a realistic exposure scenario, several parameters (e.g., the source, the orientation of incident fields, the morphology of subjects) vary and influence the dose. Deterministic dosimetry, so far used to analyze human exposure to electromagnetic fields (EMF), is highly time consuming if the previously-mentioned variations are considered. Stochastic dosimetry is an alternative method to build analytical approximations of exposure at a lower computational cost. In this study, it was used to assess the influence of magnetic flux density (B) orientation on fetal exposure at 50 Hz by polynomial chaos (PC). A PC expansion of induced electric field (E) in each fetal tissue at different gestational ages (GA) was built as a function of B orientation. Maximum E in each fetal tissue and at each GA was estimated for different exposure configurations and compared with the limits of the International Commission of Non-Ionising Radiation Protection (ICNIRP) Guidelines 2010. PC theory resulted in an efficient tool to build accurate approximations of E in each fetal tissue. B orientation strongly influenced E, with a variability across tissues from 10% to 43% with respect to the mean value. However, varying B orientation, maximum E in each fetal tissue was below the limits of ICNIRP 2010 at all GAs.

  6. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…

  7. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of the frequency spectra to compute two matrix polynomials. The matrix polynomials are intermediate step to obtain a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters are computed from the polynomials and subsequently realization theory is used to recover a minimum order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspect of the algorithm.

  8. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images. Specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial for a given graph is #P-hard even for planar graphs. For a practical application the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural networks is computed and some numerical invariants for such networks are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
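
    For a sense of what is being computed, here is a small deletion-contraction sketch of the Tutte polynomial (pure Python with SymPy; real connectivity graphs would use the Maple packages named above or another library, since the computation is #P-hard in general).

```python
# Deletion-contraction recursion for the Tutte polynomial of a small multigraph.
import sympy as sp

x, y = sp.symbols("x y")

def tutte(edges):
    """edges: list of (u, v) pairs; loops (u == v) and parallel edges allowed."""
    if not edges:
        return sp.Integer(1)
    u, v = edges[0]
    rest = edges[1:]
    if u == v:                                        # loop
        return y * tutte(rest)
    if is_bridge(edges, 0):                           # bridge
        return x * tutte(contract(rest, u, v))
    return tutte(rest) + tutte(contract(rest, u, v))  # delete + contract

def contract(edges, u, v):
    """Identify vertex v with u in the remaining edge list."""
    repl = lambda w: u if w == v else w
    return [(repl(a), repl(b)) for a, b in edges]

def is_bridge(edges, i):
    """True if removing edge i disconnects its endpoints."""
    u, v = edges[i]
    rest = edges[:i] + edges[i + 1:]
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for a, b in rest:
            for nxt, other in ((a, b), (b, a)):
                if other == w and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if v in seen:
            return False
    return v not in seen

# Triangle graph: the known result is T(K3; x, y) = x^2 + x + y.
print(sp.expand(tutte([(1, 2), (2, 3), (1, 3)])))
```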

  9. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.

  10. Elasticity solutions for a class of composite laminate problems with stress singularities

    NASA Technical Reports Server (NTRS)

    Wang, S. S.

    1983-01-01

    A study on the fundamental mechanics of fiber-reinforced composite laminates with stress singularities is presented. Based on the theory of anisotropic elasticity and Lekhnitskii's complex-variable stress potentials, a system of coupled governing partial differential equations is established. An eigenfunction expansion method is introduced to determine the orders of stress singularities in composite laminates with various geometric configurations and material systems. Complete elasticity solutions are obtained for this class of singular composite laminate mechanics problems. Homogeneous solutions in eigenfunction series and particular solutions in polynomials are presented for several cases of interest. Three examples are given to illustrate the method of approach and the basic nature of the singular laminate elasticity solutions. The first problem is the well-known laminate free-edge stress problem, which has a rather weak stress singularity. The second problem is the important composite delamination problem, which has a strong crack-tip stress singularity. The third problem is the commonly encountered bonded composite joint, which has a complex solution structure with moderate orders of stress singularities.

  11. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
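
    As a sketch of the Bernstein-expansion enclosure mentioned above (the example polynomial is arbitrary, and Kodiak itself is written in C++, not Python): on [0, 1] the Bernstein coefficients of a polynomial bound its range because the Bernstein basis functions are nonnegative and sum to one.

```python
# Bernstein-coefficient enclosure of a univariate polynomial's range on [0, 1].
from math import comb
import numpy as np

def bernstein_enclosure(a):
    """a: power-basis coefficients a[0] + a[1]*x + ... on [0, 1]."""
    n = len(a) - 1
    b = [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1)) for j in range(n + 1)]
    return min(b), max(b)

a = [1.0, -4.0, 6.0, -3.0]                        # p(x) = 1 - 4x + 6x^2 - 3x^3
lo, hi = bernstein_enclosure(a)
xs = np.linspace(0.0, 1.0, 1001)
samples = np.polyval(a[::-1], xs)
print("enclosure:", (lo, hi), " sampled range:", (samples.min(), samples.max()))
```

    The enclosure is sound but not always tight; branch and bound frameworks tighten it by subdividing the domain and re-expanding on each piece.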

  12. Cylinder stitching interferometry: with and without overlap regions

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-06-01

    Since the cylinder surface is closed and periodic in the azimuthal direction, existing stitching methods cannot be used to yield the 360° form map. To address this problem, this paper presents two methods for stitching interferometry of a cylinder: one requires overlap regions, and the other does not. For the former, we use the first-order approximation of the cylindrical coordinate transformation to build the stitching model, with which the relative parameters between adjacent sub-apertures can be calculated. For the latter, a set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials, was developed. With these polynomials, individual sub-aperture data can be expanded as a composition of the inherent form of the partial cylinder surface and additional misalignment parameters. The 360° form map can then be acquired by simultaneously fitting all sub-aperture data with LF polynomials. Finally, the two proposed methods are compared under various conditions. The merits and drawbacks of each stitching method are consequently revealed to provide guidance for acquiring the 360° form map of a precision cylinder.

  13. Semiparametric methods for estimation of a nonlinear exposure-outcome relationship using instrumental variables with application to Mendelian randomization.

    PubMed

    Staley, James R; Burgess, Stephen

    2017-05-01

    Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.

  14. Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

    PubMed Central

    Staley, James R.

    2017-01-01

    ABSTRACT Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167

  15. Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)

    DTIC Science & Technology

    2010-06-01

    the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1,+1]...which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0...the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) a collision-free criterion is developed and

  16. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408; Roy, Pinaki, E-mail: pinaki@isical.ac.in

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  17. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.

  18. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that provides a mixture of polynomials, in which the right mixture is generated iteratively. This mixture provides a generalized formalism of traditional neural networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order (RK4) method, achieved by solving the 1st-order nonlinear ordinary differential equation (ODE) commonly found in the Riccati differential equation. Our research shows improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantages of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN and the Neuro Method (NM).

  19. Characterizing baseline shift with a 4th-order polynomial function for a portable biomedical near-infrared spectroscopy device

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

    Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health care applications in noninvasive hemodynamic measurements. The baseline shift of the measurement attracts much attention because of its clinical importance, yet currently published methods have low reliability or high variability. In this study, we identify a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS and found that the 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters of a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than the corresponding 2nd-order values. By using the highly reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
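
    A minimal sketch of the comparison (the drift model, sampling grid, and noise level are invented for illustration): fit 2nd- and 4th-order polynomials to a slowly drifting signal and compare the residual SSE after baseline removal.

```python
# 2nd- vs 4th-order polynomial baseline fits to a synthetic drifting NIRS-like signal.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 300.0, 3000)                               # seconds
drift = 1e-9 * t**4 - 4e-7 * t**3 + 2e-5 * t**2 + 1e-3 * t      # slow baseline drift
signal = 0.05 * np.sin(2 * np.pi * 0.1 * t)                     # hemodynamic-like component
y = signal + drift + 0.005 * rng.normal(size=t.size)

for deg in (2, 4):
    baseline = np.polyval(np.polyfit(t, y, deg), t)
    sse = np.sum((y - baseline) ** 2)
    print(f"degree {deg}: SSE of residual after baseline removal = {sse:.4f}")
```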

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  1. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of Zernike polynomials on the electrical properties of an axisymmetric reflector fed by an axial-mode helical antenna is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.

  2. Membrane triangles with corner drilling freedoms. I - The EFF element

    NASA Technical Reports Server (NTRS)

    Alvin, Ken; De La Fuente, Horacio M.; Haugen, Bjorn; Felippa, Carlos A.

    1992-01-01

    The formulation of 3-node 9-DOF membrane elements with normal-to-element-plane rotations (drilling freedoms) is examined in the context of parametrized variational principles. In particular, attention is given to the application of the extended free formulation (EFF) to the construction of a triangular membrane element with drilling freedoms that initially has complete quadratic polynomial expansions in each displacement component. The main advantage of the EFF over the free formulation triangle is that an explicit form is obtained for the higher-order stiffness.

  3. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.

  4. Aided target recognition processing of MUDSS sonar data

    NASA Astrophysics Data System (ADS)

    Lau, Brian; Chao, Tien-Hsin

    1998-09-01

    The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Lab to demonstrate multi-sensor, real-time, survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success upon another Coastal Systems Station (CSS) sonar image database.

  5. Solution of Fifth-order Korteweg and de Vries Equation by Homotopy perturbation Transform Method using He's Polynomial

    NASA Astrophysics Data System (ADS)

    Sharma, Dinkar; Singh, Prince; Chauhan, Shubha

    2017-06-01

    In this paper, a combined form of the Laplace transform method with the homotopy perturbation method is applied to solve nonlinear fifth order Korteweg de Vries (KdV) equations. The method is known as homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further the results are compared with Homotopy perturbation method (HPM).

  6. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    PubMed

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.

  7. Temperature dependent lattice constant of InSb above room temperature

    NASA Astrophysics Data System (ADS)

    Breivik, Magnus; Nilsen, Tron Arne; Fimland, Bjørn-Ove

    2013-10-01

    Using temperature dependent X-ray diffraction on two InSb single crystalline substrates, the bulk lattice constant of InSb was determined between 32 and 325 °C. A polynomial function was fitted to the data: a(T) = 6.4791 + 3.28×10⁻⁵ T + 1.02×10⁻⁸ T² Å (T in °C), which gives slightly higher values than previously published (which go up to 62 °C). From the fit, the thermal expansion of InSb was calculated to be α(T) = 5.062×10⁻⁶ + 3.15×10⁻⁹ T K⁻¹ (T in °C). We found that the thermal expansion coefficient is higher than previously published values above 100 °C (more than 10% higher at 325 °C).

  8. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(−Q(x)) dx (Q a polynomial or Q(x) = x^β, β > 0), or (2) varying weights dα_n(x) = e^(−nV(x)) dx (V analytic, lim_(x→∞) V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix-valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest-descent-type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.

  9. Rapid computation of photoacoustic fields from normal and pathological red blood cells using a Green's function method

    NASA Astrophysics Data System (ADS)

    Saha, Ratan K.; Fadhel, Muhannad N.; Lawrence, Aamna; Karmakar, Subhajit; Adhikari, Arunabha; Kolios, Michael C.

    2017-03-01

    Photoacoustic (PA) field calculations using a Green's function approach are presented. The method has been applied to predict PA spectra generated by normal (discocyte) and pathological (stomatocyte) red blood cells (RBCs). The contours of normal and pathological RBCs were generated by employing a popular parametric model and, accordingly, fitted with Legendre polynomial expansions for surface parametrization. The first frequency minimum of the theoretical PA spectrum appears at approximately 607 MHz for a discocyte and 410 MHz for a stomatocyte when computed from the direction of the symmetry axis. The same feature occurs near 247 and 331 MHz, respectively, for those particles when measured along the perpendicular direction. The average experimental spectrum for normal RBCs is found to be flat over a bandwidth of 150-500 MHz when measured along the direction of the symmetry axis. For spherical RBCs, both the theoretical and experimental spectra demonstrate a negative slope over a bandwidth of 250-500 MHz. Using the Green's function method discussed, it may be possible to rapidly characterize cellular morphology from single-particle PA spectra.

  10. Computation of Temperature-Dependent Legendre Moments of a Double-Differential Elastic Cross Section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbanas, Goran; Dunn, Michael E; Larson, Nancy M

    2011-01-01

    A general expression for temperature-dependent Legendre moments of a double-differential elastic scattering cross section was derived by Ouisloumen and Sanchez [Nucl. Sci. Eng. 107, 189-200 (1991)]. Attempts to compute this expression are hindered by the three-fold nested integral, limiting their practical application to just the zeroth Legendre moment of an isotropic scattering. It is shown that the two innermost integrals could be evaluated analytically to all orders of Legendre moments, and for anisotropic scattering, by a recursive application of the integration by parts method. For this method to work, the anisotropic angular distribution in the center of mass is expressed as an expansion in Legendre polynomials. The first several Legendre moments of elastic scattering of neutrons on U-238 are computed at T=1000 K at incoming energy 6.5 eV for isotropic scattering in the center of mass frame. Legendre moments of the anisotropic angular distribution given via Blatt-Biedenharn coefficients are computed at ~1 keV. The results are in agreement with those computed by the Monte Carlo method.

  11. multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2017-11-01

    Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
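    A one-dimensional, non-intrusive illustration of the polynomial chaos expansion idea used above (multiUQ itself is intrusive and couples several stochastic fields): a quantity Q(ξ) of a standard normal variable is projected onto probabilists' Hermite polynomials, and its mean and variance are read off the coefficients. The example Q = exp(ξ) is arbitrary.

```python
# Sketch: a one-variable polynomial chaos expansion (probabilists' Hermite basis)
# for a quantity Q(xi) with xi ~ N(0, 1). Non-intrusive illustration of the
# expansion idea only; not the multiUQ solver.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, e

def pce_coefficients(Q, order, nquad=40):
    xi, w = He.hermegauss(nquad)              # nodes/weights for weight exp(-xi^2/2)
    coeffs = []
    for n in range(order + 1):
        Hen = He.hermeval(xi, np.eye(order + 1)[n])
        # c_n = <Q, He_n> / <He_n, He_n>, with <He_n, He_n> = n! * sqrt(2*pi)
        coeffs.append(np.sum(w * Q(xi) * Hen) / (factorial(n) * sqrt(2 * pi)))
    return np.array(coeffs)

c = pce_coefficients(np.exp, order=8)         # Q(xi) = exp(xi), i.e. lognormal
mean = c[0]
var = sum(factorial(n) * c[n]**2 for n in range(1, len(c)))
print(mean, e**0.5)                           # both ~1.6487
print(var, e * (e - 1))                       # both ~4.6708
```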

  12. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray based method determining wave energy distributions in complex built up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.

  13. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.
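    A toy version of the least-squares surrogate construction mentioned above for the low-dimensional radius space (the compressed-sensing and iterative-rotation steps for the charge space are not shown): a tensor-product Legendre surrogate over two scaled parameters is fitted by least squares to synthetic "energy" samples.

```python
# Sketch: a least-squares Legendre surrogate over two uncertain parameters
# (stand-ins for a radius and a charge scaled to [-1, 1]). Synthetic data only.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)                  # scaled radius parameter (hypothetical)
x2 = rng.uniform(-1, 1, 200)                  # scaled charge parameter (hypothetical)
energy = 1.0 - 0.4 * x1 + 0.2 * x2 + 0.1 * x1 * x2   # toy training responses

V = L.legvander2d(x1, x2, [2, 2])             # tensor-product Legendre design matrix
coef, *_ = np.linalg.lstsq(V, energy, rcond=None)

# Evaluate the surrogate at a new parameter point.
test = L.legvander2d(np.array([0.3]), np.array([-0.5]), [2, 2]) @ coef
print(test)
```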

  14. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
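    For a quick symbolic check of such a decomposition one might use SymPy's built-in routine (this is not the division-and-substitution procedure proposed in the note); the example rational function is arbitrary.

```python
# Quick check of a partial fraction decomposition with an irreducible quadratic
# factor, using SymPy's apart (not the method proposed in the note).
import sympy as sp

x = sp.symbols('x')
expr = (3 * x + 5) / ((x - 1) * (x**2 + x + 1))   # x^2 + x + 1 is irreducible over R
print(sp.apart(expr, x))
```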

  15. Interpolation Hermite Polynomials For Finite Element Method

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

    We describe a new algorithm for analytic calculation of high-order Hermite interpolation polynomials of the simplex and give their classification. A typical example of triangle element, to be built in high accuracy finite element schemes, is given.

  16. A general method for computing Tutte polynomials of self-similar graphs

    NASA Astrophysics Data System (ADS)

    Gong, Helin; Jin, Xian'an

    2017-10-01

    Self-similar graphs have been widely studied in both combinatorics and statistical physics. Motivated by the construction of the well-known 3-dimensional Sierpiński gasket graphs, in this paper we introduce a family of recursively constructed self-similar graphs whose inner duals also have the self-similar property. By combining the dual property of the Tutte polynomial and the subgraph-decomposition trick, we show that the Tutte polynomial of this family of graphs can be computed in an iterative way, and in particular an exact formula for the number of their spanning trees is derived. Furthermore, we show that our method is a general one that is easily extended to compute Tutte polynomials for other families of self-similar graphs such as Farey graphs, 2-dimensional Sierpiński gasket graphs, Hanoi graphs, modified Koch graphs, Apollonian graphs, the pseudofractal scale-free web, the fractal scale-free network, etc.

  17. Development and Evaluation of a Hydrostatic Dynamical Core Using the Spectral Element/Discontinuous Galerkin Methods

    DTIC Science & Technology

    2014-04-01

    The CG and DG horizontal discretizations employ high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points; inside each element, (N+1) GLL quadrature points are built, where N indicates the polynomial order of the basis.

  18. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. Accordingly, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary-element-based methods, e.g. the Quadratic Boundary Element Method (QBEM) and the Fast Multipole Accelerated QBEM (FMA-QBEM), and with a fourth-order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
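    As a small consistency check of the basis idea (not the HPC cell assembly or boundary treatment), the sketch below verifies with SymPy that a few low-order harmonic polynomials satisfy the 3D Laplace equation.

```python
# Sketch: verify that a few 3D harmonic polynomials (the elementary solutions used
# as the local basis in the HPC method) satisfy Laplace's equation.
import sympy as sp

x, y, z = sp.symbols('x y z')
basis = [sp.Integer(1), x, y, z,
         x*y, x*z, y*z,
         x**2 - y**2, 2*z**2 - x**2 - y**2]     # first few solid harmonics

laplacian = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
print([sp.simplify(laplacian(f)) for f in basis])   # all zero
```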

  19. Fourier-Legendre expansion of the one-electron density matrix of ground-state two-electron atoms.

    PubMed

    Ragot, Sébastien; Ruiz, María Belén

    2008-09-28

    The density matrix ρ(r, r′) of a spherically symmetric system can be expanded as a Fourier-Legendre series of Legendre polynomials P_l(cos θ), where cos θ = r·r′/(rr′). Application is here made to harmonically trapped electron pairs (i.e., Moshinsky's and Hooke's atoms), for which exact wavefunctions are known, and to the helium atom, using a near-exact wavefunction. In the present approach, generic closed-form expressions are derived for the series coefficients of ρ(r, r′). The series expansions are shown to converge rapidly in each case, with respect to both the electron number and the kinetic energy. In practice, a two-term expansion accounts for most of the correlation effects, so that the correlated density matrices of the atoms at issue are essentially linear functions of P_1(cos θ) = cos θ. For example, in the case of Hooke's atom, a two-term expansion takes in 99.9% of the electrons and 99.6% of the kinetic energy. The correlated density matrices obtained are finally compared to their determinantal counterparts, using a simplified representation of the density matrix ρ(r, r′), suggested by the Legendre expansion. Interestingly, two-particle correlation is shown to impact the angular delocalization of each electron, in the one-particle space spanned by the r and r′ variables.

  20. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.

  1. Symmetric digit sets for elliptic curve scalar multiplication without precomputation

    PubMed Central

    Heuberger, Clemens; Mazzoli, Michela

    2014-01-01

    We describe a method to perform scalar multiplication on two classes of ordinary elliptic curves, namely E: y^2 = x^3 + Ax in prime characteristic p ≡ 1 (mod 4), and E: y^2 = x^3 + B in prime characteristic p ≡ 1 (mod 3). On these curves, the 4th and 6th roots of unity act as (computationally efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width-w NAF (Non-Adjacent Form) digit expansion of positive integers to the complex base τ, where τ is a zero of the characteristic polynomial x^2 - tx + p of the Frobenius endomorphism associated to the curve. We provide a precomputationless algorithm by means of a convenient factorisation of the unit group of residue classes modulo τ in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve. PMID:25190900
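    For orientation, the sketch below computes an ordinary width-w NAF digit expansion of an integer; the paper's construction instead works to the complex base τ with a specially built digit set, which is not reproduced here.

```python
# Sketch: a standard width-w NAF digit expansion of an ordinary integer, shown only
# to illustrate the digit-expansion idea (not the paper's complex-base tau-NAF).
def wnaf(n, w=3):
    """Return width-w NAF digits of n, least significant first."""
    digits = []
    while n > 0:
        if n % 2:
            d = n % (1 << w)
            if d >= (1 << (w - 1)):
                d -= (1 << w)                 # choose the signed residue
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

d = wnaf(1234567, w=4)
print(d)
print(sum(di * (1 << i) for i, di in enumerate(d)))   # reconstructs 1234567
```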

  2. Concerning the Development of the Wide-Field Optics for WFXT Including Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications

    NASA Technical Reports Server (NTRS)

    Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.

    2010-01-01

    We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough so that second order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
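    The optimization step described above reduces to linear algebra: if the detector performance is approximated as q(p) = c + gᵀp + ½ pᵀHp, setting the gradient to zero gives Hp = -g. The sketch below solves such a system with made-up coefficients standing in for the ray-trace-derived ones.

```python
# Sketch: minimize a quadratic merit function q(p) = c + g.p + 0.5 p^T H p by solving
# H p = -g. The numbers below are illustrative, not ray-trace-derived coefficients.
import numpy as np

H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])   # symmetric positive-definite Hessian (toy values)
g = np.array([0.2, -0.1, 0.05])   # gradient at the reference design (toy values)

p_opt = np.linalg.solve(H, -g)
print("optimal parameter perturbations:", p_opt)
```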

  3. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    Application of artificial neural networks (ANN's) to adaptive channel equalization in a digital communication system with 4-QAM signal constellation is reported in this paper. A novel computationally efficient single layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of eigenvalue ratio (EVR) of input correlation matrix on the equalizer performance has been studied. The comparison of computational complexity involved for the three ANN structures is also provided.
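    A minimal sketch of the functional-expansion idea behind the FLANN (not the paper's 4-QAM channel-equalizer configuration): a scalar input is expanded with trigonometric terms and a single layer of weights is trained by an LMS rule on a toy nonlinear target.

```python
# Sketch: FLANN-style functional expansion of a scalar input with trigonometric
# terms, trained by an LMS rule on a made-up nonlinear target.
import numpy as np

def expand(x):
    # Functional expansion of the input pattern by trigonometric polynomials.
    return np.array([1.0, x,
                     np.cos(np.pi * x), np.sin(np.pi * x),
                     np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)])

rng = np.random.default_rng(1)
w = np.zeros(6)
mu = 0.05                                        # LMS step size
target = lambda x: np.sign(np.sin(np.pi * x))    # toy nonlinear decision function

for _ in range(5000):
    x = rng.uniform(-1, 1)
    phi = expand(x)
    err = target(x) - w @ phi
    w += mu * err * phi                          # LMS update of the single-layer weights

x_test = np.linspace(-0.9, 0.9, 7)
print([float(np.round(w @ expand(x), 2)) for x in x_test])
```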

  4. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  5. Galaxy halo expansions: a new biorthogonal family of potential-density pairs

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn; Erkal, Denis

    2018-05-01

    Efficient expansions of the gravitational field of (dark) haloes have two main uses in the modelling of galaxies: first, they provide a compact representation of numerically constructed (or real) cosmological haloes, incorporating the effects of triaxiality, lopsidedness or other distortion. Secondly, they provide the basis functions for self-consistent field expansion algorithms used in the evolution of N-body systems. We present a new family of biorthogonal potential-density pairs constructed using the Hankel transform of the Laguerre polynomials. The lowest order density basis functions are double-power-law profiles cusped like ρ ~ r^(-2+1/α) at small radii with asymptotic density fall-off like ρ ~ r^(-3-1/(2α)). Here, α is a parameter satisfying α ≥ 1/2. The family therefore spans the range of inner density cusps found in numerical simulations, but has much shallower - and hence more realistic - outer slopes than the corresponding members of the only previously known family deduced by Zhao and exemplified by Hernquist & Ostriker. When α = 1, the lowest order density profile has an inner density cusp of ρ ~ r^(-1) and an outer density slope of ρ ~ r^(-3.5), similar to the famous Navarro, Frenk & White (NFW) model. For this reason, we demonstrate that our new expansion provides a more accurate representation of flattened NFW haloes than the competing Hernquist-Ostriker expansion. We utilize our new expansion by analysing a suite of numerically constructed haloes and providing the distributions of the expansion coefficients.

  6. Polynomial modal analysis of lamellar diffraction gratings in conical mounting.

    PubMed

    Randriamihaja, Manjakavola Honore; Granet, Gérard; Edee, Kofi; Raniriharinosy, Karyl

    2016-09-01

    An efficient numerical modal method for modeling a lamellar grating in conical mounting is presented. Within each region of the grating, the electromagnetic field is expanded onto Legendre polynomials, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our code is successfully validated by comparison with results obtained with the analytical modal method.

  7. A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.

    NASA Technical Reports Server (NTRS)

    Harris, J. D.

    1971-01-01

    The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.

  8. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
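    A minimal sketch of the general fitting idea (not the paper's full RPC bias-compensation pipeline): residuals observed at ground control points are modelled with a thin-plate spline and then evaluated at other pixels. SciPy's radial basis interpolator stands in for the authors' implementation, and the residual values are synthetic.

```python
# Sketch: model image-space residuals at ground control points with a thin-plate
# spline and apply the correction elsewhere. Synthetic residuals only.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(2)
col, row = rng.uniform(0, 3000, 21), rng.uniform(0, 3000, 21)    # GCP image coords
d_col = 2.0 + 1e-3 * col - 5e-4 * row + 0.2 * np.sin(col / 800)  # column residuals (toy)

tps = Rbf(col, row, d_col, function='thin_plate')

# Predicted column bias of the vendor RPC projection at an arbitrary pixel (toy values).
c0, r0 = 1500.0, 1200.0
print("predicted column bias at (%.0f, %.0f): %.3f px" % (c0, r0, tps(c0, r0)))
```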

  9. Distortion theorems for polynomials on a circle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubinin, V N

    2000-12-31

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|^2 and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turan inequalities. The method of proof is based on the techniques of generalized reduced moduli.

  10. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    The particle filtering techniques have been receiving increasing attention from the hydrologic community due to its ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.

  11. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.

  12. Tensor calculus in polar coordinates using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Vasil, Geoffrey M.; Burns, Keaton J.; Lecoanet, Daniel; Olver, Sheehan; Brown, Benjamin P.; Oishi, Jeffrey S.

    2016-11-01

    Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r = 0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.

  13. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  14. On Everhart Method

    NASA Astrophysics Data System (ADS)

    Pârv, Bazil

    This paper deals with the Everhart numerical integration method, a well-known method in astronomical research. This method, a single-step one, is widely used for numerical integration of motion equation of celestial bodies. For an integration step, this method uses unequally-spaced substeps, defined by the roots of the so-called generating polynomial of Everhart's method. For this polynomial, this paper proposes and proves new recurrence formulae. The Maple computer algebra system was used to find and prove these formulae. Again, Maple seems to be well suited and easy to use in mathematical research.

  15. A method for deriving lower bounds for the complexity of monotone arithmetic circuits computing real polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gashkov, Sergey B; Sergeev, Igor' S

    2012-10-31

    This work suggests a method for deriving lower bounds for the complexity of polynomials with positive real coefficients implemented by circuits of functional elements over the monotone arithmetic basis {x + y, x · y} ∪ {a · x | a ∈ R_+}. Using this method, several new results are obtained. In particular, we construct examples of polynomials of degree m-1 in each of the n variables with coefficients 0 and 1 having additive monotone complexity m^((1-o(1))n) and multiplicative monotone complexity m^((1/2-o(1))n) as m^n → ∞. In this form, the lower bounds derived here are sharp. Bibliography: 72 titles.

  16. Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror

    NASA Astrophysics Data System (ADS)

    Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu

    2017-02-01

    Aiming at a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by this system is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's correction ability for Zernike polynomial wave aberrations 3-20 is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the correction ability for Zernike aberrations 3-9 is higher than that for aberrations 10-20, and this ordering does not change as the misalignment error changes. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for aberrations 3-20 gradually decreases; as the translation error increases, the correction ability for aberrations 3-9 gradually decreases, while that for aberrations 10-20 fluctuates.

  17. Stability analysis of fuzzy parametric uncertain systems.

    PubMed

    Bhiwani, R J; Patre, B M

    2011-10-01

    This paper deals with the determination of the stability margin, gain margin, and phase margin of fuzzy parametric uncertain systems (FPUS), and studies the stability of uncertain linear systems whose coefficients are described by fuzzy functions. A complexity-reduced technique for determining the stability margin of FPUS is proposed. The suggested method depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than 5, it is not always necessary to determine and check all four Kharitonov polynomials. It is shown that, for determining the stability margin of FPUS of order five, four, and three, only 3, 2, and 1 Kharitonov polynomials are required, respectively. Only for sixth- and higher-order polynomials is the complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margins of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
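    For reference, the four Kharitonov polynomials of an interval polynomial are formed by a fixed low/high coefficient pattern; the sketch below builds all four and checks Hurwitz stability from their roots. The interval bounds are made-up values, and, per the abstract, fewer than four polynomials suffice for orders below six.

```python
# Sketch: build the four Kharitonov polynomials for an interval polynomial
# (coefficients given as [lower, upper] bounds in ascending powers) and check roots.
import numpy as np

lo = np.array([1.0, 2.0, 3.0, 1.0, 0.5])     # a0..a4 lower bounds (toy)
hi = np.array([1.5, 2.5, 3.5, 1.2, 0.7])     # a0..a4 upper bounds (toy)

patterns = {                                  # coefficient selection, period 4
    'K1': 'lluu', 'K2': 'uull', 'K3': 'luul', 'K4': 'ullu',
}

def kharitonov(name):
    sel = patterns[name]
    return np.array([lo[i] if sel[i % 4] == 'l' else hi[i] for i in range(len(lo))])

for name in patterns:
    c = kharitonov(name)                      # ascending powers
    roots = np.roots(c[::-1])                 # np.roots expects descending powers
    print(name, "Hurwitz stable:", bool(np.all(roots.real < 0)))
# All four are stable for these toy intervals, so the whole family is robustly stable.
```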

  18. The sensitivity of catchment hypsometry and hypsometric properties to DEM resolution and polynomial order

    NASA Astrophysics Data System (ADS)

    Liffner, Joel W.; Hewa, Guna A.; Peel, Murray C.

    2018-05-01

    Derivation of the hypsometric curve of a catchment, and properties relating to that curve, requires both use of topographical data (commonly in the form of a Digital Elevation Model - DEM), and the estimation of a functional representation of that curve. An early investigation into catchment hypsometry concluded 3rd order polynomials sufficiently describe the hypsometric curve, without the consideration of higher order polynomials, or the sensitivity of hypsometric properties relating to the curve. Another study concluded the hypsometric integral (HI) is robust against changes in DEM resolution, a conclusion drawn from a very limited sample size. Conclusions from these earlier studies have resulted in the adoption of methods deemed to be "sufficient" in subsequent studies, in addition to assumptions that the robustness of the HI extends to other hypsometric properties. This study investigates and demonstrates the sensitivity of hypsometric properties to DEM resolution, DEM type and polynomial order through assessing differences in hypsometric properties derived from 417 catchments and sub-catchments within South Australia. The sensitivity of hypsometric properties across DEM types and polynomial orders is found to be significant, which suggests careful consideration of the methods chosen to derive catchment hypsometric information is required.

  19. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    The polygonal finite element method (PFEM), which constructs shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM variants, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a large number of integration points has to be used to obtain sufficiently accurate results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. Its features can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of the VNM satisfy all the requirements of the finite element method. To test the performance of the VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, the VNM achieves significantly better results than the Wachspress and Mean Value methods. Moreover, the VNM achieves better results than triangular 3-node elements in the accuracy test.

  20. Integrated thermal disturbance analysis of optical system of astronomical telescope

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Jiang, Zibo; Li, Xinnan

    2008-07-01

    During operation, an astronomical telescope undergoes thermal disturbances, which are especially serious in a solar telescope and may degrade image quality. This motivates careful investigation of thermal loads, and of measures to assess their effect on the final image quality, during the design phase. Integrated modeling analysis supports the search for a comprehensive optimum design scheme through software simulation. In this paper, we focus on the finite element analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefly described from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power-series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from shape functions in finite element theory to interface the thermal model with the structural model, and further to apply the temperatures to the structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. Data interfacing and communication between the two software packages are discussed, mainly for mirror surfaces and hence for the representation and transfer of the optical figure. We compare and comment on two different methods, Zernike polynomials and power-series expansion, for representing deformed optical surfaces and transferring them to ZEMAX. The application of these methods to surfaces with non-circular apertures is also discussed. Finally, an optical telescope with a parabolic primary mirror 900 mm in diameter is analyzed to illustrate the above discussion. A finite element model of the parts of the telescope of most interest is generated in ANSYS with the necessary structural simplifications and equivalences. Thermal analysis is performed, the resulting positions and figures of the optics are retrieved and transferred to ZEMAX, and the final image quality under thermal disturbance is evaluated.

  1. On the Rate of Relaxation for the Landau Kinetic Equation and Related Models

    NASA Astrophysics Data System (ADS)

    Bobylev, Alexander; Gamba, Irene M.; Zhang, Chenglong

    2017-08-01

    We study the rate of relaxation to equilibrium for the Landau kinetic equation and some related models by considering the relatively simple case of radial solutions of the linear Landau-type equations. The well-known difficulty is that the evolution operator has no spectral gap, i.e. its spectrum is not separated from zero. Hence we do not expect purely exponential relaxation for large values of time t>0. One of the main goals of our work is to numerically identify the large time asymptotics for the relaxation to equilibrium. We recall the work of Strain and Guo (Arch Rat Mech Anal 187:287-339 2008, Commun Partial Differ Equ 31:17-429 2006), who rigorously show that the expected law of relaxation is exp(-ct^(2/3)) with some c > 0. In this manuscript, we find a heuristic way, based on asymptotic methods, to obtain this "law of two thirds", and then study the question numerically. More specifically, the linear Landau equation is approximated by a set of ODEs based on expansions in generalized Laguerre polynomials. We analyze the corresponding quadratic form and the solution of these ODEs in detail. It is shown that the solution has two different asymptotic stages for large values of time t and maximal order of polynomials N: the first is an intermediate asymptotics that agrees with the "law of two thirds" for moderately large values of time t, and the second is an absolute, purely exponential asymptotics for very large t, as expected for linear ODEs. We believe that the appearance of intermediate asymptotics in finite dimensional approximations must be a generic behavior for different classes of equations in functional spaces (some PDEs, Boltzmann equations for soft potentials, etc.) and that our methods can be applied to related problems.

  2. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of implementing an acceleration of more than 10 times faster than the hydrologic model without compromising the predictive accuracy.

  3. Rational approximations of f(R) cosmography through Padé polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z>1, we take into account the Padé rational approximations, which consist in performing expansions converging in high-redshift domains. Particularly, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which point towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
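    As a generic illustration of why Padé approximants outperform truncated Taylor series away from the expansion point (the f(R) reconstruction itself is not reproduced), the sketch below builds a [2/2] Padé approximant of ln(1+x) from its Taylor coefficients with SciPy and compares the two at x = 2.5, outside the Taylor convergence radius.

```python
# Sketch: a [2/2] Pade approximant built from Taylor coefficients, compared with the
# truncated Taylor series outside its convergence region. Generic example (ln(1+x)).
import numpy as np
from scipy.interpolate import pade

taylor = [0.0, 1.0, -1.0 / 2, 1.0 / 3, -1.0 / 4]   # ln(1+x) about x = 0, radius 1
p, q = pade(taylor, 2)                              # denominator order 2 -> [2/2]

x = 2.5                                             # well outside the Taylor radius
taylor_val = sum(c * x**k for k, c in enumerate(taylor))
print("Pade [2/2]:", p(x) / q(x))                   # ~1.24
print("Taylor O(x^4):", taylor_val)                 # diverges badly (~ -5.2)
print("exact ln(1+x):", np.log(1 + x))              # ~1.25
```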

  4. Vector-valued Jack polynomials and wavefunctions on the torus

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    2017-06-01

    The Hamiltonian of the quantum Calogero-Sutherland model of N identical particles on the circle with 1/r 2 interactions has eigenfunctions consisting of Jack polynomials times the base state. By use of the generalized Jack polynomials taking values in modules of the symmetric group and the matrix solution of a system of linear differential equations one constructs novel eigenfunctions of the Hamiltonian. Like the usual wavefunctions each eigenfunction determines a symmetric probability density on the N-torus. The construction applies to any irreducible representation of the symmetric group. The methods depend on the theory of generalized Jack polynomials due to Griffeth, and the Yang-Baxter graph approach of Luque and the author.

  5. Impacts of Sigma Coordinates on the Euler and Navier-Stokes Equations using Continuous Galerkin Methods

    DTIC Science & Technology

    2009-03-01

    The 1-D Lagrange polynomial local basis functions are defined using Legendre-Gauss-Lobatto interpolation points. All cases were run using 10th-order polynomials, with contours from -0.05 to 0.525 K at an interval of 0.025 K, after 700 s, for resolutions of (a) 20, (b) 10, and (c) 5 m.

  6. Distortion theorems for polynomials on a circle

    NASA Astrophysics Data System (ADS)

    Dubinin, V. N.

    2000-12-01

    Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turan inequalities. The method of proof is based on the techniques of generalized reduced moduli.

  7. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    NASA Astrophysics Data System (ADS)

    Boyd, John P.

    2013-08-01

    A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ Σ_{j=0}^{N} a_j cos(jt) + Σ_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c,s) where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16 digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations. In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
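    A small sketch in the same spirit (not the paper's exact CCM or ECM construction): substituting z = e^(it) turns the trigonometric polynomial into an algebraic polynomial of degree 2N whose roots are obtained from a companion matrix via numpy.roots; the real zeros of f_N are the angles of the unit-modulus roots. The example polynomial is arbitrary.

```python
# Sketch: real zeros of a trigonometric polynomial via the substitution z = e^{it},
# which gives a degree-2N algebraic polynomial whose roots numpy computes from a
# companion matrix. Related in spirit to, but not identical with, the CCM method.
import numpy as np

def trig_roots(a, b):
    """a[0..N], b[1..N] (b[0] ignored); returns real roots t in [0, 2*pi)."""
    N = len(a) - 1
    c = np.zeros(2 * N + 1, dtype=complex)        # coefficients of z^0 .. z^{2N}
    c[N] = a[0]
    for j in range(1, N + 1):
        c[N + j] += 0.5 * (a[j] - 1j * b[j])
        c[N - j] += 0.5 * (a[j] + 1j * b[j])
    z = np.roots(c[::-1])                          # np.roots wants descending powers
    z = z[np.abs(np.abs(z) - 1.0) < 1e-8]          # keep roots on the unit circle
    return np.sort(np.mod(np.angle(z), 2 * np.pi))

# Example: f(t) = cos(t) + sin(2t) has zeros at pi/2, 7pi/6, 3pi/2, 11pi/6.
a = [0.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0]
print(trig_roots(a, b))
```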

  8. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.

  9. New Formulae for the High-Order Derivatives of Some Jacobi Polynomials: An Application to Some High-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, W. M.

    2014-01-01

    This paper is concerned with deriving some new formulae expressing explicitly the high-order derivatives of Jacobi polynomials whose parameter difference is one or two, of any degree and of any order, in terms of their corresponding Jacobi polynomials. The derivative formulae for Chebyshev polynomials of the third and fourth kinds of any degree and of any order in terms of their corresponding Chebyshev polynomials are deduced as special cases. Some new reduction formulae for summing some terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivative formulae, an algorithm for solving special sixth-order boundary value problems is implemented by applying the Galerkin method. A numerical example is presented to ascertain the validity and the applicability of the proposed algorithms. PMID:25386599

  10. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  11. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For real-time solution of inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, in which traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from SHM according to the approximate linear characteristic of regional disturbing potential. Firstly, deflections of vertical (DOVs) on dense grids are calculated with SHM in an external computer. And then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and applicable region of polynomial model are both updated synchronously in above computer. Compared with high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than NGM. Finally, numerical test and INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS. PMID:27669261
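    A minimal sketch of the fitting step (not the SHM evaluation or the online region update): a second-order polynomial in two scaled position coordinates is fitted to gridded deflection-of-vertical values by least squares. The DOV values are synthetic.

```python
# Sketch: fit a two-dimensional second-order polynomial to gridded deflection-of-
# vertical (DOV) values by least squares. Synthetic DOV data only.
import numpy as np

lat, lon = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))  # scaled grid
x, y = lat.ravel(), lon.ravel()
dov = 3.0 + 0.8 * x - 0.5 * y + 0.2 * x * y + 0.1 * x**2 - 0.05 * y**2  # toy DOVs

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])         # 2nd-order terms
coeffs, *_ = np.linalg.lstsq(A, dov, rcond=None)
print(np.round(coeffs, 3))   # recovers [3.0, 0.8, -0.5, 0.1, 0.2, -0.05]
```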

  12. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For real-time solution of inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, in which traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from SHM according to the approximate linear characteristic of regional disturbing potential. Firstly, deflections of vertical (DOVs) on dense grids are calculated with SHM in an external computer. And then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and applicable region of polynomial model are both updated synchronously in above computer. Compared with high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than NGM. Finally, numerical test and INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS.

  13. Measurement of the n-p elastic scattering angular distribution at E{sub n}=14.9 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boukharouba, N.; Bateman, F. B.; Carlson, A. D.

    2010-07-15

    The relative differential cross section for the elastic scattering of neutrons by protons was measured at an incident neutron energy E_n = 14.9 MeV and for center-of-mass scattering angles ranging from about 60 deg. to 180 deg. Angular distribution values were obtained from the normalization of the integrated data to the n-p total elastic scattering cross section. Comparisons of the normalized data to the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and the ENDF/B-VII.0 evaluation are sensitive to the value of the total elastic scattering cross section used to normalize the data. The results of a fit to a first-order Legendre polynomial expansion are in good agreement in the backward scattering hemisphere with the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and to a lesser extent, with the ENDF/B-VII.0 evaluation. A fit to a second-order expansion is in better agreement with the ENDF/B-VII.0 evaluation than with the other predictions, in particular when the total elastic scattering cross section given by Arndt et al. and the Nijmegen group is used to normalize the data. A Legendre polynomial fit to the existing n-p scattering data in the 14 MeV energy region, excluding the present measurement, showed that a best fit is obtained for a second-order expansion. Furthermore, the Kolmogorov-Smirnov test confirms the general agreement in the backward scattering hemisphere and shows that significant differences between the database and the predictions occur in the angular range between 60 deg. and 120 deg. and below 20 deg. Although there is good overall agreement in the backward scattering hemisphere, more precision small-angle scattering data and a better definition of the total elastic cross section are needed for an accurate determination of the shape and magnitude of the angular distribution.

  14. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique to obtain the analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must be fitted. Zernike polynomials are competent for fitting the wavefronts of centrosymmetric optical systems, but not of axisymmetrical optical systems. Although a high-degree polynomial fitting method achieves higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes results in a large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time for coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted by bicubic uniform B-splines as well as by high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.

  15. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  16. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.

  17. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field, and Gaussian elimination, play a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring for which Gaussian elimination is not defined; their triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating-point approaches to Euclidean elimination are not well understood. New algorithms are presented which entirely circumvent such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  18. A two-step, fourth-order method with energy preserving properties

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato

    2012-09-01

    We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained by the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to always be the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are finally presented in order to compare the behavior of the new methods with the theoretical results.

  19. Design of reinforced areas of concrete column using quadratic polynomials

    NASA Astrophysics Data System (ADS)

    Arif Gunadi, Tjiang; Parung, Herman; Rachman Djamaluddin, Abd; Arwin Amiruddin, A.

    2017-11-01

    The design of reinforced concrete columns is mostly carried out with a simple planning method that uses the column interaction diagram. However, the application of this method is limited because it is valid only for certain compressive strengths of the concrete and yield strengths of the reinforcement. Thus, a more widely applicable method is still needed. An alternative is the use of quadratic polynomials as the basis for designing reinforced concrete columns, where the ratio of the neutral axis to the effective height of a cross section (ξ), when associated with ξ in the same cross section for different reinforcement ratios, is assumed to form a quadratic polynomial. This is identical to the basic principle used in Simpson's rule for numerical integration with quadratic polynomials and has a sufficient level of accuracy. This approach is applied to both the normal force equilibrium and the moment equilibrium. The abscissa of the intersection of the two curves is the ratio mentioned above, since it fulfills both equilibria. The application of this method is relatively more complicated than the existing method, but it is provided with tables and graphs (N vs ξN) and (M vs ξM) so that its use can be simplified. These tables are distinguished only by the compressive strength of the concrete, so in application they can be combined with the various yield strengths of the reinforcement available on the market. This method can be implemented in programming languages such as Fortran.

  20. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.

  1. Parametric analysis of ATM solar array.

    NASA Technical Reports Server (NTRS)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
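
    A minimal sketch of the third and fourth steps described above (a polynomial fit of a cell characteristic versus temperature, then evaluation of the fitted polynomial to generate parametric data), using synthetic values; the numbers and variable names are placeholders, not SKYLAB test data.

```python
import numpy as np

# Synthetic "solar cell characteristic vs. temperature" data standing in for
# solar simulator test results; the values are illustrative only.
temperature = np.array([-50.0, -25.0, 0.0, 25.0, 50.0, 75.0, 100.0])          # deg C
open_circuit_voltage = np.array([0.68, 0.66, 0.63, 0.60, 0.57, 0.54, 0.50])   # V

# Polynomial fit of the characteristic versus temperature (third program's role).
coeffs = np.polyfit(temperature, open_circuit_voltage, deg=2)

# Evaluate the fitted polynomial to generate parametric data (fourth program's role).
print(np.polyval(coeffs, 30.0))   # predicted characteristic at 30 deg C
```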

  2. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how the Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  3. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd power polynomials in 1-D slab geometry. By changing interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high order accuracy numerical quadrature schemes. (authors)
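
    The self-lumping idea for 1-D slab geometry can be illustrated numerically: when the Lagrange interpolation points coincide with the Gauss-Legendre quadrature points, quadrature of the basis products restricted to those points gives a diagonal mass matrix whose entries are the quadrature weights. The sketch below, with assumed helper names, demonstrates this and contrasts it with the consistent mass matrix; it is not the authors' transport code.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def lagrange_basis(nodes, x):
    """Values of the Lagrange basis built on `nodes`, evaluated at points x."""
    x = np.atleast_1d(x)
    vals = np.ones((len(nodes), len(x)))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                vals[i] *= (x - xj) / (xi - xj)
    return vals

p = 3                                   # polynomial degree of the trial space
nodes, weights = leggauss(p + 1)        # interpolation points = quadrature points

# Self-lumped mass matrix: quadrature restricted to the interpolation points.
# Because L_i(x_q) = delta_iq, the result is exactly diag(weights).
basis_at_nodes = lagrange_basis(nodes, nodes)
M_lumped = np.einsum('q,iq,jq->ij', weights, basis_at_nodes, basis_at_nodes)

# Consistent mass matrix for comparison, using a much finer quadrature.
xq, wq = leggauss(4 * (p + 1))
basis_fine = lagrange_basis(nodes, xq)
M_consistent = np.einsum('q,iq,jq->ij', wq, basis_fine, basis_fine)

print(np.allclose(M_lumped, np.diag(weights)))   # True: diagonal mass matrix
print(np.round(M_consistent, 4))                 # full (non-diagonal) matrix
```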

  4. Calculation of the second term of the exact Green's function of the diffusion equation for diffusion-controlled chemical reactions

    NASA Astrophysics Data System (ADS)

    Plante, Ianik

    2016-01-01

    The exact Green's function of the diffusion equation (GFDE) is often considered to be the gold standard for the simulation of partially diffusion-controlled reactions. As the GFDE with angular dependency is quite complex, the radial GFDE is more often used. Indeed, the exact GFDE is expressed as a Legendre expansion, the coefficients of which are given in terms of an integral comprising Bessel functions. This integral does not seem to have been evaluated analytically in existing literature. While the integral can be evaluated numerically, the Bessel functions make the integral oscillate and convergence is difficult to obtain. Therefore it would be of great interest to evaluate the integral analytically. The first term was evaluated previously, and was found to be equal to the radial GFDE. In this work, the second term of this expansion was evaluated. As this work has shown that the first two terms of the Legendre polynomial expansion can be calculated analytically, it raises the question of the possibility that an analytical solution exists for the other terms.

  5. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.

  6. The discrete Toda equation revisited: dual β-Grothendieck polynomials, ultradiscretization, and static solitons

    NASA Astrophysics Data System (ADS)

    Iwao, Shinsuke; Nagai, Hidetomo

    2018-04-01

    This paper presents a study of the discrete Toda equation that was introduced in 1977. In this paper, it is proved that the determinantal solution of the discrete Toda equation, obtained via the Lax formalism, is naturally related to the dual Grothendieck polynomials, a K-theoretic generalization of the Schur polynomials. A tropical permanent solution to the ultradiscrete Toda equation is also derived. The proposed method gives a tropical algebraic representation of the static solitons. Lastly, a new cellular automaton realization of the ultradiscrete Toda equation is proposed.

  7. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    PubMed Central

    2012-01-01

    Background: The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods: We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results: The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions: Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
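
    A minimal sketch of the strain-estimation step: each displacement component is fitted with a polynomial (here first order) by least squares, the displacement gradient is read off the fitted coefficients, and the Green-Lagrange strain tensor is formed. The synthetic displacement field and function name are assumptions for illustration, not the DENSE processing pipeline.

```python
import numpy as np

def green_lagrange_strain(points, displacements):
    """Fit each displacement component with a first-order polynomial in (x, y, z)
    by least squares and return the Green-Lagrange strain tensor (sketch only)."""
    x, y, z = points.T
    A = np.column_stack([np.ones_like(x), x, y, z])          # linear model
    grad_u = np.zeros((3, 3))
    for comp in range(3):
        coeffs, *_ = np.linalg.lstsq(A, displacements[:, comp], rcond=None)
        grad_u[comp] = coeffs[1:]                             # du_comp/d(x, y, z)
    F = np.eye(3) + grad_u                                    # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))                        # strain tensor E

# Synthetic displacement field u = H @ X with a known gradient H.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
H = np.array([[0.05, 0.01, 0.0], [0.0, -0.02, 0.0], [0.0, 0.0, 0.01]])
U = X @ H.T
print(np.round(green_lagrange_strain(X, U), 4))
```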

  8. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    NASA Astrophysics Data System (ADS)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
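
    A toy sketch of the degree-selection step: polynomials of increasing degree are fitted to absorbance-versus-concentration data by least squares and compared through AIC computed from the residual sum of squares under a Gaussian error assumption. The data, function name, and stopping rule are illustrative assumptions, not the authors' cryospectroscopic analysis.

```python
import numpy as np

def aic_for_degree(conc, absorbance, degree):
    """Least-squares polynomial fit of absorbance vs. monomer concentration and
    its AIC under a Gaussian error assumption (illustrative sketch only)."""
    coeffs = np.polyfit(conc, absorbance, degree)
    rss = np.sum((np.polyval(coeffs, conc) - absorbance) ** 2)
    n, k = len(conc), degree + 1
    return n * np.log(rss / n) + 2 * k, coeffs

# Synthetic data: monomer plus a weak dimer contribution (quadratic in concentration).
rng = np.random.default_rng(1)
conc = np.linspace(0.0, 1.0, 40)
absorbance = 0.8 * conc + 0.15 * conc**2 + rng.normal(0.0, 0.005, conc.size)

for degree in range(1, 5):
    aic, _ = aic_for_degree(conc, absorbance, degree)
    print(degree, round(aic, 1))   # pick the degree at which the AIC stops improving
```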

  9. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach allows the complete system to be reduced to a unique polynomial equation in one variable driving all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and to recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the retained number of harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.

  10. On conjugate gradient type methods and polynomial preconditioners for a class of complex non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1988-01-01

    Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + i(sigma)I, where T is Hermitian and sigma a real scalar. Three different conjugate gradient type approaches with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning. Results on the optimal choice of the polynomial preconditioner are given. Also, some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are reported.

  11. An Exact Formula for Calculating Inverse Radial Lens Distortions

    PubMed Central

    Drap, Pierre; Lefèvre, Julien

    2016-01-01

    This article presents a new approach to calculating the inverse of radial distortions. The method presented here models the reverse radial distortion, itself described by a polynomial expression, as another polynomial expression whose new coefficients are functions of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, can be interesting in terms of performance, reuse of existing software, or bridging between different existing software tools that do not consider distortion from the same point of view. PMID:27258288
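
    The flavour of the approach can be reproduced symbolically for a simple assumed forward model with two radial coefficients, r_d = r(1 + k1 r^2 + k2 r^4): substituting a polynomial ansatz for the inverse and matching powers of r_d recovers the new coefficients as functions of the original ones. This is a hedged sketch of truncated series inversion, not the recursive formula derived in the article.

```python
import sympy as sp

r_d, k1, k2, c3, c5 = sp.symbols('r_d k1 k2 c3 c5')

# Assumed forward model with two radial coefficients: r_d = r*(1 + k1*r**2 + k2*r**4).
# Ansatz for the inverse as a polynomial in the distorted radius r_d.
r_inverse = r_d + c3 * r_d**3 + c5 * r_d**5

# Substitute the ansatz into the forward model; the result should reduce to r_d,
# so the coefficients of r_d**3 and r_d**5 must vanish (truncated series inversion).
residual = sp.expand(r_inverse * (1 + k1 * r_inverse**2 + k2 * r_inverse**4) - r_d)
solution = sp.solve([residual.coeff(r_d, 3), residual.coeff(r_d, 5)], [c3, c5])
print(solution)   # expected: {c3: -k1, c5: 3*k1**2 - k2}
```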

  12. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.

  13. The determination of the elastodynamic fields of an ellipsoidal inhomogeneity

    NASA Technical Reports Server (NTRS)

    Fu, L. S.; Mura, T.

    1983-01-01

    The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.

  14. On a quadrature formula of Gori and Micchelli

    NASA Astrophysics Data System (ADS)

    Yang, Shijun

    2005-04-01

    Sparked by Bojanov (J. Comput. Appl. Math. 70 (1996) 349), we provide an alternate approach to quadrature formulas based on the zeros of the Chebyshev polynomial of the first kind for any weight function w, introduced and studied in Gori and Micchelli (Math. Comp. 65 (1996) 1567), thereby improving on their observations. Upon expansion of the divided differences, we obtain explicit expressions for the corresponding Cotes coefficients in Gauss-Turan quadrature formulas for and I(fTn;w) for a Gori-Micchelli weight function. It is also interesting to mention that what has been neglected by the literature for about 30 years is that, as a consequence of the expansion of the divided differences in the special case when , the solution of the famous Turan Problem 26, raised in 1980, was in fact implied by a result of Micchelli and Rivlin (IBM J. Res. Develop. 16 (1972) 372) in 1972. Some concluding comments are made in the final section.

  15. Quasi-periodic Solutions of the Kaup-Kupershmidt Hierarchy

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Wu, Lihua; He, Guoliang

    2013-08-01

    Based on solving the Lenard recursion equations and the zero-curvature equation, we derive the Kaup-Kupershmidt hierarchy associated with a 3×3 matrix spectral problem. Resorting to the characteristic polynomial of the Lax matrix for the Kaup-Kupershmidt hierarchy, we introduce a trigonal curve {K}_{m-1} and present the corresponding Baker-Akhiezer function and meromorphic function on it. The Abel map is introduced to straighten out the Kaup-Kupershmidt flows. With the aid of the properties of the Baker-Akhiezer function and the meromorphic function and their asymptotic expansions, we arrive at their explicit Riemann theta function representations. The Riemann-Jacobi inversion problem is achieved by comparing the asymptotic expansion of the Baker-Akhiezer function and its Riemann theta function representation, from which quasi-periodic solutions of the entire Kaup-Kupershmidt hierarchy are obtained in terms of the Riemann theta functions.

  16. Expansion into lattice harmonics in cubic symmetries

    NASA Astrophysics Data System (ADS)

    Kontrym-Sznajd, G.

    2018-05-01

    On the example of a few sets of sampling directions in the Brillouin zone, this work shows how strongly the choice of the cubic harmonics affects the quality of approximation of some quantities by a series of such harmonics. These studies led to the following questions: (1) In the case that for a given l there are several independent harmonics, can one use in the expansion only one harmonic with a given l? (2) How should harmonics be ordered: according to l or, after writing them in terms of (x^4 + y^4 + z^4)^n (x^2 y^2 z^2)^m, according to their degree q = n + m? To enable practical applications of such harmonics, they are constructed in terms of the associated Legendre polynomials up to l = 26. It is shown that electron momentum densities, reconstructed from experimental data for ErGa3 and InGa3, are described much better by harmonics ordered by q.

  17. New formulae between Jacobi polynomials and some fractional Jacobi functions generalizing some connection formulae

    NASA Astrophysics Data System (ADS)

    Abd-Elhameed, W. M.

    2017-07-01

    In this paper, a new formula relating Jacobi polynomials of arbitrary parameters with the squares of certain fractional Jacobi functions is derived. The derived formula is expressed in terms of a certain terminating hypergeometric function of the type _4F3(1) . With the aid of some standard reduction formulae such as Pfaff-Saalschütz's and Watson's identities, the derived formula can be reduced in simple forms which are free of any hypergeometric functions for certain choices of the involved parameters of the Jacobi polynomials and the Jacobi functions. Some other simplified formulae are obtained via employing some computer algebra algorithms such as the algorithms of Zeilberger, Petkovsek and van Hoeij. Some connection formulae between some Jacobi polynomials are deduced. From these connection formulae, some other linearization formulae of Chebyshev polynomials are obtained. As an application to some of the introduced formulae, a numerical algorithm for solving nonlinear Riccati differential equation is presented and implemented by applying a suitable spectral method.

  18. Computing Tutte polynomials of contact networks in classrooms

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2013-05-01

    Objective: The topological complexity of contact networks in classrooms and the potential transmission of an infectious disease were analyzed by sex and age. Methods: The Tutte polynomials, some topological properties and the number of spanning trees were used to algebraically compute the topological complexity. Computations were made with the Maple package GraphTheory. Published data of mutually reported social contacts within a classroom, taken from a primary school and consisting of children in the age ranges of 4-5, 7-8 and 10-11, were used. Results: The algebraic complexity of the Tutte polynomial and the probability of disease transmission increase with age. The contact networks are not bipartite graphs; gender segregation was observed, especially in younger children. Conclusion: Tutte polynomials are tools to understand the topology of the contact networks and to derive numerical indexes of such topologies. It is possible to establish relationships between the Tutte polynomial of a given contact network and the potential transmission of an infectious disease within such a network.

  19. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.

  20. Stabilization of an inverted pendulum-cart system by fractional PI-state feedback.

    PubMed

    Bettayeb, M; Boussalem, C; Mansouri, R; Al-Saggaf, U M

    2014-03-01

    This paper deals with pole placement PI-state feedback controller design to control an integer order system. The fractional aspect of the control law is introduced by a dynamic state feedback as u(t)=K(p)x(t)+K(I)I(α)(x(t)). The closed loop characteristic polynomial is thus fractional for which the roots are complex to calculate. The proposed method allows us to decompose this polynomial into a first order fractional polynomial and an integer order polynomial of order n-1 (n being the order of the integer system). This new stabilization control algorithm is applied for an inverted pendulum-cart test-bed, and the effectiveness and robustness of the proposed control are examined by experiments. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  1. Implementation and testing of the on-the-fly thermal scattering Monte Carlo sampling method for graphite and light water in MCNP6

    DOE PAGES

    Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.

    2016-01-23

    Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.

  2. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
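
    A minimal sketch of a higher-order (here third-order) per-pixel NUC calibration: each pixel's raw response at several known flux levels is fitted with a polynomial mapping raw counts to reference radiance, and the fitted polynomial is then applied to correct new frames. The array shapes, function names, and synthetic detector model are assumptions for illustration, not the study's SWIR data or hardware pipeline.

```python
import numpy as np

def fit_nuc_coefficients(raw_frames, reference_levels, order=3):
    """Per-pixel polynomial non-uniformity correction coefficients.
    raw_frames: (n_levels, rows, cols) calibration frames at known flux levels.
    Returns an (order+1, rows, cols) array of coefficients (illustrative sketch)."""
    n_levels, rows, cols = raw_frames.shape
    raw = raw_frames.reshape(n_levels, -1)                      # flatten pixels
    coeffs = np.empty((order + 1, rows * cols))
    for p in range(raw.shape[1]):
        coeffs[:, p] = np.polyfit(raw[:, p], reference_levels, order)
    return coeffs.reshape(order + 1, rows, cols)

def apply_nuc(frame, coeffs):
    """Evaluate each pixel's correction polynomial on a raw frame (Horner form)."""
    corrected = np.zeros_like(frame, dtype=float)
    for c in coeffs:                       # highest-order coefficient first
        corrected = corrected * frame + c
    return corrected

# Synthetic 4x4 detector with per-pixel gain/offset and a mild nonlinearity, 6 levels.
rng = np.random.default_rng(2)
levels = np.linspace(0.1, 1.0, 6)
gain = 1.0 + 0.1 * rng.standard_normal((4, 4))
offset = 0.05 * rng.standard_normal((4, 4))
raw = gain * levels[:, None, None] + offset + 0.02 * levels[:, None, None] ** 2
coeffs = fit_nuc_coefficients(raw, levels)
print(np.max(np.abs(apply_nuc(raw[3], coeffs) - levels[3])))   # small residual
```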

  3. Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Ragan, Robert J.

    2014-01-01

    The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable, and Legendre functions involving the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a time duration that depends on the size of the expansion series used.

  4. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.

    PubMed

    Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad

    2017-11-30

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.

  5. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients

    PubMed Central

    Solano-Altamirano, Juan Manuel; Khikhlukha, Danila

    2017-01-01

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features. PMID:29189722

  6. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  7. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  8. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  9. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
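
    A one-dimensional sketch of the algorithm, under the assumption of a uniform density on [-1, 1] (whose unweighted equilibrium measure is the arcsine density): samples are drawn from the arcsine measure, weights are taken as evaluations of the Christoffel function of the orthonormal Legendre basis, and a weighted least-squares problem is solved for the expansion coefficients. The target function and parameter choices are illustrative only.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(3)
N = 8            # number of basis polynomials (degrees 0..N-1)
M = 40           # number of Monte Carlo collocation samples

f = lambda x: np.exp(x) * np.sin(3 * x)          # stand-in parametric function

# Sample from the (unweighted) equilibrium measure of [-1, 1]: the arcsine density.
x = np.cos(np.pi * rng.random(M))

# Orthonormal Legendre basis w.r.t. the uniform probability measure on [-1, 1].
V = L.legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)

# Christoffel-function weights: N divided by the diagonal of the reproducing kernel.
w = N / np.sum(V**2, axis=1)

# Weighted least squares for the expansion coefficients.
sqrt_w = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(sqrt_w[:, None] * V, sqrt_w * f(x), rcond=None)

# Check the surrogate on a dense grid.
xt = np.linspace(-1, 1, 200)
Vt = L.legvander(xt, N - 1) * np.sqrt(2 * np.arange(N) + 1)
print(np.max(np.abs(Vt @ coeffs - f(xt))))
```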

  10. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anze

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  11. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  12. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE PAGES

    Cieplak, Agnieszka M.; Slosar, Anze

    2017-10-12

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  13. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
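
    A small numerical sketch of the central point: since the n-th Legendre coefficient of the PDF is (2n+1)/2 times the expectation of P_n, it can be estimated directly from moments (here, from mock samples) without binning. The beta-distributed mock fluxes and the truncation order are assumptions for illustration, not survey data, and the paper's noise treatment is omitted.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(4)

# Mock "flux" samples in [0, 1] standing in for Lyman-alpha forest pixel fluxes.
flux = rng.beta(5.0, 2.0, size=100_000)
x = 2.0 * flux - 1.0                    # rescale the support to [-1, 1]

# n-th Legendre coefficient of the PDF: c_n = (2n+1)/2 * E[P_n(x)],
# i.e. a linear combination of the first n moments, estimated here from samples.
n_max = 6
coeffs = np.array([(2 * n + 1) / 2.0 * np.mean(L.legval(x, np.eye(n_max + 1)[n]))
                   for n in range(n_max + 1)])
print(np.round(coeffs, 3))

# Compare the truncated expansion with a binned (histogram) estimate of the PDF.
hist, edges = np.histogram(x, bins=50, range=(-1, 1), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(L.legval(centers, coeffs) - hist)))
```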

  14. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
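
    As a highly simplified illustration of sampling a Bayesian posterior with an SDE-based MCMC method, the sketch below integrates an overdamped Langevin Itô SDE, whose invariant distribution is the posterior, using an explicit Euler-Maruyama step on a toy Gaussian problem. The paper's scheme uses an implicit Euler discretization of a more general SDE; the model, step size, and burn-in choice here are assumptions.

```python
import numpy as np

def log_posterior_grad(theta, data, prior_std=1.0, noise_std=0.5):
    """Gradient of a toy Gaussian log-posterior (Gaussian prior and likelihood)."""
    return -theta / prior_std**2 + np.sum(data - theta) / noise_std**2

rng = np.random.default_rng(5)
data = rng.normal(1.5, 0.5, size=20)          # synthetic observations

# Overdamped Langevin SDE  d(theta) = grad(log posterior) dt + sqrt(2) dW,
# discretized here with a simple explicit Euler-Maruyama step (sketch only).
dt, n_steps = 1e-3, 50_000
theta, samples = 0.0, []
for step in range(n_steps):
    theta += dt * log_posterior_grad(theta, data) + np.sqrt(2 * dt) * rng.standard_normal()
    if step > n_steps // 5:                   # discard a burn-in transient
        samples.append(theta)

print(np.mean(samples), np.std(samples))      # posterior mean and spread estimates
```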

  15. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case, the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N x N matrix the eigenvalues can be determined in O(log-squared N) time with N-squared processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N-squared) time on a single processor, O(N) time with N processors, and O(log N) time with N-squared processors.

  16. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.

  17. Information entropy of Gegenbauer polynomials and Gaussian quadrature

    NASA Astrophysics Data System (ADS)

    Sánchez-Ruiz, Jorge

    2003-05-01

    In a recent paper (Buyarov V S, López-Artés P, Martínez-Finkelshtein A and Van Assche W 2000 J. Phys. A: Math. Gen. 33 6549-60), an efficient method was provided for evaluating in closed form the information entropy of the Gegenbauer polynomials C_n^(λ)(x) in the case when λ = l ∈ ℕ. For given values of n and l, this method requires the computation by means of recurrence relations of two auxiliary polynomials, P(x) and H(x), of degrees 2l - 2 and 2l - 4, respectively. Here it is shown that P(x) is related to the coefficients of the Gaussian quadrature formula for the Gegenbauer weights w_l(x) = (1 - x^2)^(l-1/2), and this fact is used to obtain the explicit expression of P(x). From this result, an explicit formula is also given for the polynomial S(x) = lim_{n→∞} P(1 - x/(2n^2)), which is relevant to the study of the asymptotic (n → ∞ with l fixed) behaviour of the entropy.

  18. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
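
    A minimal one-dimensional sketch of the Moving Least Squares idea (not the multi-variable implementation of the study) is given below; the Gaussian weight width and the toy data are assumptions.

      import numpy as np

      def mls_predict(x_query, X, y, h=0.5):
          # Moving Least Squares in one dimension: at each query point, fit a local
          # quadratic by weighted least squares with a Gaussian locality weight of width h.
          B = np.vander(X, 3, increasing=True)            # basis [1, x, x^2] at the data
          preds = []
          for xq in np.atleast_1d(x_query):
              sw = np.sqrt(np.exp(-((X - xq) ** 2) / (2.0 * h ** 2)))
              coef, *_ = np.linalg.lstsq(B * sw[:, None], y * sw, rcond=None)
              preds.append(np.array([1.0, xq, xq ** 2]) @ coef)
          return np.array(preds)

      # toy response data with noise
      rng = np.random.default_rng(0)
      X = np.linspace(0.0, 4.0, 25)
      y = np.sin(X) + 0.05 * rng.standard_normal(X.size)
      print(mls_predict([1.0, 2.5], X, y))                # compare with sin(1.0), sin(2.5)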

  19. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In the paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated by this polynomial. The input space is sampled using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not be identified in the case of the nonlinear FEM model alone.
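
    The workflow can be illustrated with a small sketch: sample the inputs by Latin Hypercube, fit a polynomial surrogate by least squares, and estimate first-order Sobol indices on the cheap surrogate. The model, domain, and sample sizes below are hypothetical stand-ins for the nonlinear FEM.

      import numpy as np
      from scipy.stats import qmc

      # Hypothetical cheap stand-in for the nonlinear FEM response, two inputs on [0, 1].
      def model(X):
          return np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 0] * X[:, 1]

      # 1. Latin Hypercube design and model evaluations.
      X = qmc.LatinHypercube(d=2, seed=1).random(200)
      y = model(X)

      # 2. Full quadratic polynomial surrogate fitted by least squares.
      def basis(X):
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

      coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
      surrogate = lambda Z: basis(Z) @ coef

      # 3. First-order Sobol indices S_i = Var(E[Y | X_i]) / Var(Y), estimated by a
      #    brute-force double loop on the surrogate (affordable because it is cheap).
      rng = np.random.default_rng(2)
      var_total = surrogate(rng.random((20000, 2))).var()
      for i in range(2):
          cond_means = []
          for g in rng.random(300):                 # conditioning values of X_i
              Z = rng.random((2000, 2))
              Z[:, i] = g
              cond_means.append(surrogate(Z).mean())
          print(f"S_{i+1} ~ {np.var(cond_means) / var_total:.3f}")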

  20. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  1. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

    In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ɛ -Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
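
    A toy comparison of a polynomial correlation model against an epsilon-SVR model, on synthetic surrogate/target signals rather than the clinical data of the study, might look as follows.

      import numpy as np
      from sklearn.svm import SVR

      # Synthetic stand-in signals: external chest-surrogate position vs. internal
      # target position (both in mm); real data would come from IR LEDs and 3D US.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 60.0, 1500)                   # 60 s of breathing motion
      external = 5.0 * np.sin(2 * np.pi * 0.25 * t)
      internal = 8.0 * np.sin(2 * np.pi * 0.25 * t + 0.4) + 0.3 * rng.standard_normal(t.size)

      n_train = 500
      Xtr, ytr = external[:n_train, None], internal[:n_train]
      Xte, yte = external[n_train:, None], internal[n_train:]

      # quadratic polynomial correlation model
      p = np.polyfit(Xtr.ravel(), ytr, 2)
      rms_poly = np.sqrt(np.mean((np.polyval(p, Xte.ravel()) - yte) ** 2))

      # epsilon-SVR correlation model
      svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(Xtr, ytr)
      rms_svr = np.sqrt(np.mean((svr.predict(Xte) - yte) ** 2))

      print(f"RMS error polynomial: {rms_poly:.2f} mm, SVR: {rms_svr:.2f} mm")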

  2. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to compare method performance, build distance-relationship models for cement retailers in Banda Aceh, predict the market area for retailers, and determine the economic order quantity (EOQ). The models differ in accuracy as measured by the mean square error (MSE). The distance relationships between retailers are used to identify the density of retailers in the town. The dataset is collected from the sales of cement retailers together with global positioning system (GPS) locations. The sales data are plotted to assess the goodness of fit of quadratic, cubic, and fourth-degree polynomial models, which relate the x-abscissa and y-ordinate of the real sales dataset. The study yields several outcomes: the four fitted models are useful for predicting the market area of a retailer in a competitive setting, the performance of the methods is compared, the distance relationships between retailers are quantified, and an inventory policy based on the economic order quantity is derived. The results show that high-density retailer areas correspond to growing populations and construction projects. The spline fits the points with a smaller MSE than the quadratic, cubic, and fourth-degree polynomials. The recommended inventory policy is of the periodic review type.
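
    A minimal sketch of the two ingredients, polynomial versus spline fitting compared by MSE and the classical EOQ formula EOQ = sqrt(2DS/H), is shown below with hypothetical demand and cost figures.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # Hypothetical monthly demand observations for one retailer (bags of cement).
      months = np.arange(1, 13, dtype=float)
      sales = np.array([120, 135, 150, 160, 158, 170, 180, 175, 190, 200, 210, 220], dtype=float)

      # Polynomial fits of increasing degree vs. a smoothing spline, compared by MSE.
      for deg in (2, 3, 4):
          coeffs = np.polyfit(months, sales, deg)
          mse = np.mean((np.polyval(coeffs, months) - sales) ** 2)
          print(f"degree-{deg} polynomial MSE: {mse:.2f}")
      spline = UnivariateSpline(months, sales, k=3, s=len(months))
      print(f"spline MSE: {np.mean((spline(months) - sales) ** 2):.2f}")

      # Economic order quantity: EOQ = sqrt(2 * D * S / H), with annual demand D,
      # ordering cost S per order, and holding cost H per unit per year (assumed values).
      D, S, H = sales.sum(), 50.0, 2.0
      print(f"EOQ = {np.sqrt(2 * D * S / H):.1f} units per order")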

  3. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  4. How many invariant polynomials are needed to decide local unitary equivalence of qubit states?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maciążek, Tomasz; Faculty of Physics, University of Warsaw, ul. Hoża 69, 00-681 Warszawa; Oszmaniec, Michał

    2013-09-15

    Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed for solving the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of spectra. Some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of reduced spaces stemming from the symplectic reduction procedure.

  5. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

    Gröbner basis is one of the most important tools in recent symbolic algebraic computations. However, computing a Gröbner basis for the given polynomial ideal is not easy and it is not numerically stable if polynomials have inexact coefficients. In this paper, we study what we should get for computing a Gröbner basis with inexact coefficients and introduce a naive method to compute a Gröbner basis by reduced row echelon form, for the ideal generated by the given polynomial set having a priori errors on their coefficients.

  6. Polynomial approximation of Poincare maps for Hamiltonian system

    NASA Technical Reports Server (NTRS)

    Froeschle, Claude; Petit, Jean-Marc

    1992-01-01

    Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.

  7. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
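
    As a minimal illustration of the non-intrusive Polynomial Chaos machinery, rather than the nozzle application of the paper, the sketch below expands a scalar function of one standard-normal input in probabilists' Hermite polynomials using Gauss-Hermite quadrature; the test function is chosen so the mean and variance are known analytically.

      import numpy as np
      from numpy.polynomial import hermite_e as He
      from math import factorial

      def pce_1d(f, order=4, nquad=20):
          # Non-intrusive PCE of f(xi), xi ~ N(0,1): coefficients c_k = E[f He_k]/k!
          # computed by Gauss-Hermite quadrature for the weight exp(-x^2/2).
          x, w = He.hermegauss(nquad)
          w = w / np.sqrt(2.0 * np.pi)             # normalize to the standard-normal density
          fx = f(x)
          c = np.array([np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0])) / factorial(k)
                        for k in range(order + 1)])
          mean = c[0]
          var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))
          return c, mean, var

      c, mean, var = pce_1d(lambda xi: np.exp(0.3 * xi), order=6)
      print(mean, np.exp(0.3 ** 2 / 2))                             # analytic mean
      print(var, np.exp(0.3 ** 2) * (np.exp(0.3 ** 2) - 1.0))       # analytic variance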

  8. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  9. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
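
    The core operation underlying such tensor-product approximations, the nearest Kronecker product of a matrix in the Frobenius norm (the Van Loan-Pitsianis rearrangement plus a rank-1 SVD), can be sketched as follows; the preconditioner of the paper is more elaborate than this single building block, and the test matrix is synthetic.

      import numpy as np

      def nearest_kronecker(A, m1, n1, m2, n2):
          # Best Frobenius-norm approximation A ~ kron(B, C) with B (m1 x n1) and
          # C (m2 x n2), via a rank-1 SVD of the rearranged matrix.
          R = A.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
          C = np.sqrt(s[0]) * Vt[0, :].reshape(m2, n2)
          return B, C

      # example: recover an exact Kronecker structure perturbed by small noise
      rng = np.random.default_rng(0)
      B0, C0 = rng.standard_normal((4, 4)), rng.standard_normal((6, 6))
      A = np.kron(B0, C0) + 1e-3 * rng.standard_normal((24, 24))
      B, C = nearest_kronecker(A, 4, 4, 6, 6)
      print(np.linalg.norm(A - np.kron(B, C)) / np.linalg.norm(A))   # small relative error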

  10. An atlas of Rapp's 180-th order geopotential.

    NASA Astrophysics Data System (ADS)

    Melvin, P. J.

    1986-08-01

    Deprit's 1979 approach to the summation of the spherical harmonic expansion of the geopotential has been modified to spherical components and normalized Legendre polynomials. An algorithm has been developed which produces ten fields at the users option: the undulations of the geoid, three anomalous components of the gravity vector, or six components of the Hessian of the geopotential (gravity gradient). The algorithm is stable to high orders in single precision and does not treat the polar regions as a special case. Eleven contour maps of components of the anomalous geopotential on the surface of the ellipsoid are presented to validate the algorithm.

  11. Quantum calculus of classical vortex images, integrable models and quantum states

    NASA Astrophysics Data System (ADS)

    Pashaev, Oktay K.

    2016-10-01

    From the two-circle theorem described in terms of q-periodic functions, in the limit q → 1 we derive the strip theorem and the stream function for the N-vortex problem. For a regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as the q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.

  12. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

    A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  13. Effect of load introduction on graphite epoxy compression specimens

    NASA Technical Reports Server (NTRS)

    Reiss, R.; Yao, T. M.

    1981-01-01

    Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping which induces bending into the specimen. An analytical model capable of quantifying these foregoing effects was developed which is based upon the principle of minimum complementary energy. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.

  14. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

    the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, g^p_{k+1} = g(x_k + h_{k+1} ...). Using a Taylor series expansion of the predicted event function, eq. (6), g^p_{k+1} = g_k + h_{k+1} (dg^p/dt)|_{(x,t)=(x_k,t_k)} + (h_{k+1}^2/2!) (d^2 g^p/dt^2)|_{(x,t)=(x_k,t_k)} + ..., we can determine the value of g^p_{k+1} as a function of the, as yet undetermined, step size h_{k+1}. Recalling

  15. Biophysical applications of neutron Compton scattering

    NASA Astrophysics Data System (ADS)

    Wanderlingh, U. N.; Albergamo, F.; Hayward, R. L.; Middendorf, H. D.

    Neutron Compton scattering (NCS) can be applied to measuring nuclear momentum distributions and potential parameters in molecules of biophysical interest. We discuss the analysis of NCS spectra from peptide models, focusing on the characterisation of the amide proton dynamics in terms of the width of the H-bond potential well, its Laplacian, and the mean kinetic energy of the proton. The Sears expansion is used to quantify deviations from the high-Q limit (impulse approximation), and line-shape asymmetry parameters are evaluated in terms of Hermite polynomials. Results on NCS from selectively deuterated acetanilide are used to illustrate this approach.

  16. Stitching interferometry of a full cylinder without using overlap areas

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-08-01

    Traditional stitching interferometry requires finding out the overlap correspondence and computing the discrepancies in the overlap regions, which makes it complex and time-consuming to obtain the 360° form map of a cylinder. In this paper, we develop a cylinder stitching model based on a new set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials. With these polynomials, individual subaperture data can be expanded as a composition of the inherent form of a partial cylinder surface and additional misalignment parameters. Then the 360° form map can be acquired by simultaneously fitting all subaperture data with the LF polynomials. A metal shaft was measured to experimentally verify the proposed method. In contrast to traditional stitching interferometry, our technique does not require overlapping of adjacent subapertures, thus significantly reducing the measurement time and making the stitching algorithm simple.

  17. Symbolic computation of recurrence equations for the Chebyshev series solution of linear ODE's. [ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Geddes, K. O.

    1977-01-01

    If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.

  18. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.

  19. Polynomial approximations of thermodynamic properties of arbitrary gas mixtures over wide pressure and density ranges

    NASA Technical Reports Server (NTRS)

    Allison, D. O.

    1972-01-01

    Computer programs for flow fields around planetary entry vehicles require real-gas equilibrium thermodynamic properties in a simple form which can be evaluated quickly. To fill this need, polynomial approximations were found for thermodynamic properties of air and model planetary atmospheres. A coefficient-averaging technique was used for curve fitting in lieu of the usual least-squares method. The polynomials consist of terms up to the ninth degree in each of two variables (essentially pressure and density) including all cross terms. Four of these polynomials can be joined to cover, for example, a range of about 1000 to 11000 K and 0.00001 to 1 atmosphere (1 atm = 1.0133 x 10^5 N/m^2) for a given thermodynamic property. Relative errors of less than 1 percent are found over most of the applicable range.
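
    A least-squares analogue of such a two-variable fit (the paper itself uses a coefficient-averaging technique and degree nine) can be sketched with numpy's two-dimensional polynomial utilities on synthetic tabulated data.

      import numpy as np
      from numpy.polynomial import polynomial as P

      # Synthetic tabulated property on a grid of two variables (stand-ins for the
      # pressure- and density-like variables of the paper).
      u = np.linspace(-5.0, 0.0, 30)
      v = np.linspace(-6.0, -2.0, 30)
      U, V = np.meshgrid(u, v, indexing="ij")
      h = 2.0 + 0.8 * U - 0.5 * V + 0.05 * U * V + 0.01 * V ** 2      # synthetic data

      # Ordinary least-squares fit with a degree-(4, 4) polynomial including all cross terms.
      deg = (4, 4)
      Vand = P.polyvander2d(U.ravel(), V.ravel(), deg)
      coef, *_ = np.linalg.lstsq(Vand, h.ravel(), rcond=None)
      coef = coef.reshape(deg[0] + 1, deg[1] + 1)

      h_fit = P.polyval2d(U, V, coef)
      print("max absolute error:", np.max(np.abs(h_fit - h)))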

  20. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v_0, the first Minkowski Functional (MF), and the two other MFs, v_1 and v_2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v_0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.

  1. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method for capturing internal structures of the human body. However, bone segmentation of US images is still challenging because it is strongly influenced by speckle noise and poor image quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling method is then applied, using the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are performed using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
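
    A simplified sketch of the gap-filling step, fitting a quadratic to the detected boundary pixels and evaluating it at the missed columns, is shown below; the paper applies the fit locally, and the contour data here are invented.

      import numpy as np

      def fill_contour_gaps(cols, rows):
          # rows holds the per-column bone-boundary row estimate, with np.nan where
          # detection failed; fit a quadratic to the detected points and fill the gaps.
          cols = np.asarray(cols, dtype=float)
          rows = np.asarray(rows, dtype=float)
          ok = ~np.isnan(rows)
          coeffs = np.polyfit(cols[ok], rows[ok], 2)      # quadratic fit to detected pixels
          filled = rows.copy()
          filled[~ok] = np.polyval(coeffs, cols[~ok])     # estimate the missing positions
          return filled

      cols = np.arange(10)
      rows = np.array([50.0, 48.5, 47.4, np.nan, 46.0, np.nan, 46.4, 47.2, 48.3, 50.1])
      print(fill_contour_gaps(cols, rows))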

  2. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

    variables over GF(2^m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of all the polynomials in the ideal are a set of isolated points, this reduced Groebner basis is in triangular form, and the univariate polynomial in that

  3. Incomplete Gröbner basis as a preconditioner for polynomial systems

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Tao, Yu-Hui; Bai, Feng-Shan

    2009-04-01

    Precondition plays a critical role in the numerical methods for large and sparse linear systems. It is also true for nonlinear algebraic systems. In this paper incomplete Gröbner basis (IGB) is proposed as a preconditioner of homotopy methods for polynomial systems of equations, which transforms a deficient system into a system with the same finite solutions, but smaller degree. The reduced system can thus be solved faster. Numerical results show the efficiency of the preconditioner.

  4. The use of rational functions in numerical quadrature

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter

    2001-08-01

    Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.

  5. Hybrid High-Order methods for finite deformations of hyperelastic materials

    NASA Astrophysics Data System (ADS)

    Abbas, Mickaël; Ern, Alexandre; Pignet, Nicolas

    2018-01-01

    We devise and evaluate numerically Hybrid High-Order (HHO) methods for hyperelastic materials undergoing finite deformations. The HHO methods use as discrete unknowns piecewise polynomials of order k≥1 on the mesh skeleton, together with cell-based polynomials that can be eliminated locally by static condensation. The discrete problem is written as the minimization of a broken nonlinear elastic energy where a local reconstruction of the displacement gradient is used. Two HHO methods are considered: a stabilized method where the gradient is reconstructed as a tensor-valued polynomial of order k and a stabilization is added to the discrete energy functional, and an unstabilized method which reconstructs a stable higher-order gradient and circumvents the need for stabilization. Both methods satisfy the principle of virtual work locally with equilibrated tractions. We present a numerical study of the two HHO methods on test cases with known solution and on more challenging three-dimensional test cases including finite deformations with strong shear layers and cavitating voids. We assess the computational efficiency of both methods, and we compare our results to those obtained with an industrial software using conforming finite elements and to results from the literature. The two HHO methods exhibit robust behavior in the quasi-incompressible regime.

  6. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  7. Jack Polynomials as Fractional Quantum Hall States and the Betti Numbers of the ( k + 1)-Equals Ideal

    NASA Astrophysics Data System (ADS)

    Zamaere, Christine Berkesch; Griffeth, Stephen; Sam, Steven V.

    2014-08-01

    We show that for Jack parameter α = -(k + 1)/(r - 1), certain Jack polynomials studied by Feigin-Jimbo-Miwa-Mukhin vanish to order r when k + 1 of the coordinates coincide. This result was conjectured by Bernevig and Haldane, who proposed that these Jack polynomials are model wavefunctions for fractional quantum Hall states. Special cases of these Jack polynomials include the wavefunctions of Laughlin and Read-Rezayi. In fact, along these lines we prove several vanishing theorems known as clustering properties for Jack polynomials in the mathematical physics literature, special cases of which had previously been conjectured by Bernevig and Haldane. Motivated by the method of proof, which in the case r = 2 identifies the span of the relevant Jack polynomials with the S_n-invariant part of a unitary representation of the rational Cherednik algebra, we conjecture that unitary representations of the type A Cherednik algebra have graded minimal free resolutions of Bernstein-Gelfand-Gelfand type; we prove this for the ideal of the (k + 1)-equals arrangement in the case when the number of coordinates n is at most 2k + 1. In general, our conjecture predicts the graded S_n-equivariant Betti numbers of the ideal of the (k + 1)-equals arrangement with no restriction on the number of ambient dimensions.

  8. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  9. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Then elimination of the dependent coefficients leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. Such derived components of the stress tensor identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.

  10. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations. The latter equation is solved by Gaussian elimination. The accuracy and validity of this method are discussed by solving two numerical examples and by comparisons with wavelet-based and other methods.

  11. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  12. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and it is easy to implement without the process of phase unwrapping.

  13. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
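
    A small sketch of the Chebyshev-polynomial part of this idea, a fixed-degree Chebyshev iteration used as the preconditioner inside a hand-written conjugate gradient loop, is given below; the approximate inverse triangular factorization of the paper is not reproduced, and the eigenvalue bounds come from simple estimates on a synthetic SPD matrix.

      import numpy as np

      def chebyshev_apply(A, r, lmin, lmax, steps=4):
          # Approximate A^{-1} r by a fixed number of Chebyshev iteration steps on the
          # interval [lmin, lmax]; with a zero initial guess this is a fixed polynomial
          # in A applied to r, so it can serve as a symmetric positive preconditioner.
          theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
          sigma1 = theta / delta
          rho = 1.0 / sigma1
          x = np.zeros_like(r)
          d = r / theta
          res = r.copy()
          for _ in range(steps):
              x = x + d
              res = res - A @ d
              rho_new = 1.0 / (2.0 * sigma1 - rho)
              d = rho_new * rho * d + (2.0 * rho_new / delta) * res
              rho = rho_new
          return x

      def pcg(A, b, precond, tol=1e-10, maxiter=500):
          # standard preconditioned conjugate gradient method
          x = np.zeros_like(b)
          r = b - A @ x
          z = precond(r)
          p = z.copy()
          rz = r @ z
          for it in range(maxiter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  return x, it + 1
              z = precond(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, maxiter

      # SPD test matrix with a known lower eigenvalue bound; upper bound by Gershgorin.
      rng = np.random.default_rng(0)
      n = 200
      Q = rng.standard_normal((n, n))
      A = Q @ Q.T + n * np.eye(n)               # eigenvalues >= n
      b = rng.standard_normal(n)
      lmin, lmax = float(n), float(np.max(np.sum(np.abs(A), axis=1)))
      x, iters = pcg(A, b, lambda r: chebyshev_apply(A, r, lmin, lmax, steps=4))
      print(iters, np.linalg.norm(A @ x - b))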

  14. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1 continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Lastly, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying the prescribed accuracy and continuity requirements, without becoming too convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar applications. Experimental results for the new surface are presented.

  15. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl.

    PubMed

    De Beuckeleer, Liene I; Herrebout, Wouter A

    2016-02-05

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before. Copyright © 2015 Elsevier B.V. All rights reserved.
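
    The degree-selection step can be sketched as follows: fit polynomials of increasing degree and compare Gaussian-error AIC and BIC scores; the concentration/absorbance data below are synthetic.

      import numpy as np

      def select_polynomial_degree(x, y, max_degree=6):
          # Fit polynomials of increasing degree and score them with AIC and BIC; higher
          # degrees lower the residual but are penalized for their extra parameters.
          n = len(x)
          results = []
          for deg in range(1, max_degree + 1):
              coeffs = np.polyfit(x, y, deg)
              rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
              k = deg + 1                                  # number of fitted parameters
              aic = n * np.log(rss / n) + 2 * k
              bic = n * np.log(rss / n) + k * np.log(n)
              results.append((deg, aic, bic))
          return results

      # synthetic monomer-concentration vs. absorbance data with a mild nonlinearity
      rng = np.random.default_rng(1)
      conc = np.linspace(0.1, 1.0, 40)
      absorbance = 0.9 * conc + 0.4 * conc ** 2 + 0.01 * rng.standard_normal(conc.size)
      for deg, aic, bic in select_polynomial_degree(conc, absorbance):
          print(f"degree {deg}: AIC {aic:7.1f}  BIC {bic:7.1f}")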

  16. Bayer Demosaicking with Polynomial Interpolation.

    PubMed

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process to reconstruct full color digital images from incomplete color samples from an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones, tablets). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM), and visual performance.

  17. Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2016-03-07

    A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained applying an appropriate change of variables to Legendre polynomials, whereas the system for general freeform case is obtained applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.

  18. Venus radar mapper attitude reference quaternion

    NASA Technical Reports Server (NTRS)

    Lyons, D. T.

    1986-01-01

    Polynomial functions of time are used to specify the components of the quaternion which represents the nominal attitude of the Venus Radar mapper spacecraft during mapping. The following constraints must be satisfied in order to obtain acceptable synthetic array radar data: the nominal attitude function must have a large dynamic range, the sensor orientation must be known very accurately, the attitude reference function must use as little memory as possible, and the spacecraft must operate autonomously. Fitting polynomials to the components of the desired quaternion function is a straightforward method for providing a very dynamic nominal attitude using a minimum amount of on-board computer resources. Although the attitude from the polynomials may not be exactly the one requested by the radar designers, the polynomial coefficients are known, so they do not contribute to the attitude uncertainty. Frequent coefficient updates are not required, so the spacecraft can operate autonomously.

  19. Fast beampattern evaluation by polynomial rooting

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge amount of beampatterns have to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern fast and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even more by decreasing the order of the polynomial.
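
    A sketch of the rooting idea for a uniform linear array is given below: the power beampattern is written through the weight autocorrelation as a polynomial in z = exp(j*psi), its derivative is rooted with numpy, and the roots on the unit circle give the extrema. The taper weights are hypothetical.

      import numpy as np

      def beampattern_extrema(w):
          # Extrema of B(psi) = |sum_n w_n exp(j n psi)|^2 for a uniform linear array:
          # B(psi) = sum_m r_m exp(j m psi) with r the weight autocorrelation, so the
          # stationary points satisfy sum_m m r_m z^(m+N-1) = 0 with z = exp(j psi).
          w = np.asarray(w, dtype=float)
          N = len(w)
          r = np.correlate(w, w, mode="full")          # autocorrelation, lags -(N-1)..(N-1)
          lags = np.arange(-(N - 1), N)
          dcoef = lags * r                             # ascending powers (power = lag + N - 1)
          roots = np.roots(dcoef[::-1])                # np.roots wants highest power first
          roots = roots[np.abs(np.abs(roots) - 1.0) < 1e-6]   # keep roots on the unit circle
          psi = np.sort(np.angle(roots))
          B = np.abs(np.exp(1j * np.outer(psi, np.arange(N))) @ w) ** 2
          return psi, B

      # example: 8-element array with a tapered weighting (hypothetical weights)
      w = np.array([0.4, 0.7, 0.9, 1.0, 1.0, 0.9, 0.7, 0.4])
      psi, B = beampattern_extrema(w)
      for p, b in zip(psi, B):
          print(f"psi = {p:+.3f} rad, B = {b:.3f}")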

  20. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm for the evaluation of the integral line intensity, for inferring the correct temperature of a hot zone in the diagnostics of combustion by absorption spectroscopy with diode lasers, is proposed. The algorithm is based not on fitting the baseline (BL) but on the expansion of the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both the experimental and simulated spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulation of the experimental data. Then, the spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from both spectra. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm^-1. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
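
    The key step, removing the low-order part of a spectrum by subtracting its first three orthogonal-polynomial components, can be sketched with Legendre polynomials as below; the scan window, line shape, and baseline are synthetic.

      import numpy as np
      from numpy.polynomial import legendre as L

      def remove_low_order(nu, spectrum, n_components=3):
          # Project the spectrum onto Legendre polynomials over the scan window and
          # subtract the first n_components (degrees 0..n_components-1), which absorb
          # the slowly varying baseline; apply the same operation to the simulated
          # spectrum before fitting.
          x = 2.0 * (nu - nu.min()) / (nu.max() - nu.min()) - 1.0   # map window to [-1, 1]
          coef = L.legfit(x, spectrum, n_components - 1)
          return spectrum - L.legval(x, coef)

      # synthetic absorbance: a Gaussian line on top of a parabolic baseline
      nu = np.linspace(7185.0, 7186.5, 400)              # wavenumber scan, cm^-1 (hypothetical)
      line = 0.05 * np.exp(-((nu - 7185.6) / 0.08) ** 2)
      baseline = 0.3 + 0.02 * (nu - 7185.0) + 0.01 * (nu - 7185.0) ** 2
      measured = line + baseline
      cleaned = remove_low_order(nu, measured)
      print("residual baseline after subtraction:",
            np.max(np.abs(cleaned - remove_low_order(nu, line))))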

  1. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, the PC surrogate construction strongly suffers from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include infeasibly large number of basis terms far exceeding the number of available model evaluations. We develop Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction leading to a sparse, high-dimensional PC surrogate with a very few model evaluations. The surrogate is then readily employed for global sensitivity analysis leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Impact of Sequential Ammonia Fiber Expansion (AFEX) Pretreatment and Pelletization on the Moisture Sorption Properties of Corn Stover

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonner, Ian J.; Thompson, David N.; Teymouri, Farzaneh

    Combining ammonia fiber expansion (AFEX™) pretreatment with a depot processing facility is a promising option for delivering high-value densified biomass to the emerging bioenergy industry. However, because the pretreatment process results in a high moisture material unsuitable for pelleting or storage (40% wet basis), the biomass must be immediately dried. If AFEX pretreatment results in a material that is difficult to dry, the economics of this already costly operation would be at risk. This work tests the nature of moisture sorption isotherms and thin-layer drying behavior of corn (Zea mays L.) stover at 20°C to 60°C before and after sequential AFEX pretreatment and pelletization to determine whether any negative impacts to material drying or storage may result from the AFEX process. The equilibrium moisture content to equilibrium relative humidity relationship for each of the materials was determined using dynamic vapor sorption isotherms and modeled with modified Chung-Pfost, modified Halsey, and modified Henderson temperature-dependent models as well as the Double Log Polynomial (DLP), Peleg, and Guggenheim Anderson de Boer (GAB) temperature-independent models. Drying kinetics were quantified under thin-layer laboratory testing and modeled using the Modified Page's equation. Water activity isotherms for non-pelleted biomass were best modeled with the Peleg temperature-independent equation while isotherms for the pelleted biomass were best modeled with the Double Log Polynomial equation. Thin-layer drying results were accurately modeled with the Modified Page's equation. The results of this work indicate that AFEX pretreatment results in drying properties more favorable than or equal to that of raw corn stover, and pellets of superior physical stability in storage.
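
    Assuming one common form of the Modified Page model, MR(t) = exp(-(kt)^n), the thin-layer fitting step can be sketched as follows with hypothetical drying data.

      import numpy as np
      from scipy.optimize import curve_fit

      # One common form of the Modified Page thin-layer drying model (an assumption here):
      # moisture ratio MR(t) = exp(-(k * t)**n).
      def modified_page(t, k, n):
          return np.exp(-np.power(k * t, n))

      # hypothetical drying-curve data: time in hours, dimensionless moisture ratio
      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
      mr = np.array([1.00, 0.78, 0.60, 0.47, 0.36, 0.22, 0.13, 0.05, 0.02])

      (k, n), _ = curve_fit(modified_page, t, mr, p0=(0.5, 1.0), bounds=(1e-6, np.inf))
      rmse = np.sqrt(np.mean((modified_page(t, k, n) - mr) ** 2))
      print(f"k = {k:.3f} 1/h, n = {n:.3f}, RMSE = {rmse:.4f}")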

  3. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.

  4. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granules, is employed to mitigate a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited to optimize the essential design parameters of the network, including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a specific subset of input PFNs. To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments using several modeling benchmarks of different levels of complexity (differing numbers of input variables and amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy than some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
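
    A compact sketch in the spirit of this architecture is given below: PCA for preprocessing, clustering to place receptive fields (scikit-learn's KMeans stands in for FCM), and membership-weighted local linear models in place of full polynomial neurons; the GA-based structure optimization is omitted and all data are synthetic:

      # Simplified RBF-style network: PCA -> cluster centers -> weighted local models.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

      Xp = PCA(n_components=3).fit_transform(X)          # dimensionality reduction
      centers = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xp).cluster_centers_

      # Gaussian receptive fields around each center (width plays the fuzzification role).
      width = 1.0
      d2 = ((Xp[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
      phi = np.exp(-d2 / (2 * width ** 2))
      phi /= phi.sum(axis=1, keepdims=True)              # normalized memberships

      # One local linear model per cluster, fitted by membership-weighted least squares.
      preds = np.zeros((len(y), len(centers)))
      A = np.hstack([Xp, np.ones((len(Xp), 1))])
      for j in range(len(centers)):
          W = np.sqrt(phi[:, j])[:, None]
          coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * y, rcond=None)
          preds[:, j] = A @ coef
      y_hat = (phi * preds).sum(axis=1)
      print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))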

  5. Secure message authentication system for node to node network

    NASA Astrophysics Data System (ADS)

    Sindhu, R.; Vanitha, M. M.; Norman, J.

    2017-10-01

    Message authentication is one of the most effective ways to prevent unauthorized and corrupted messages from being forwarded in wireless sensor networks (WSNs). For this purpose, many message authentication schemes have been developed, based on either symmetric-key cryptography or public-key cryptosystems. Most of them, however, suffer from high computational and communication overhead in addition to a lack of scalability and of resilience to node compromise. A polynomial-based scheme was recently proposed to address these problems. However, this scheme and its extensions have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted exceeds this threshold, the adversary can fully recover the polynomial. This paper proposes using ECC (Elliptic Curve Cryptography) instead. While supporting node authentication, the technique presented in this paper permits a node to transmit an unlimited number of messages without suffering from the threshold problem, and it also provides message source privacy. Both theoretical analysis and simulation results show that the proposed scheme is more efficient than the polynomial-based method in terms of computational and communication overhead while providing message source privacy.
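
    The elliptic-curve primitive underlying such a scheme can be illustrated with ECDSA signing and verification using the Python 'cryptography' package; this sketch shows only the basic sign/verify step, not the paper's hop-by-hop scheme or its source-privacy mechanism, and the message content and node names are hypothetical:

      # Minimal ECDSA message authentication between sensor nodes (illustrative only).
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.exceptions import InvalidSignature

      sender_key = ec.generate_private_key(ec.SECP256R1())   # source node's key pair
      public_key = sender_key.public_key()                    # distributed to verifiers

      message = b"sensor reading: 23.4 C, node 17"
      signature = sender_key.sign(message, ec.ECDSA(hashes.SHA256()))

      # Any intermediate or destination node holding the public key can verify.
      try:
          public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
          print("message authenticated")
      except InvalidSignature:
          print("message rejected")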

  6. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
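
    A small illustration of the underlying idea, under the common convention that the normalized-Laplacian eigenvalues play the role of energy levels in a Boltzmann distribution (the paper's exact definitions and its low-order Taylor approximations are not reproduced here), might look as follows:

      # Boltzmann-style energy and entropy of a graph from its normalized Laplacian.
      import numpy as np
      import networkx as nx

      G = nx.erdos_renyi_graph(n=50, p=0.1, seed=1)
      L = nx.normalized_laplacian_matrix(G).toarray()
      lam = np.linalg.eigvalsh(L)

      beta = 1.0                                    # inverse temperature
      w = np.exp(-beta * lam)
      Z = w.sum()                                   # partition function
      p = w / Z                                     # occupation probabilities
      U = float((p * lam).sum())                    # average energy
      S = float(-(p * np.log(p)).sum())             # entropy

      # The traces Tr(L^k) used by the paper's low-order approximations are cheap
      # to obtain without computing the full spectrum:
      traces = [np.trace(np.linalg.matrix_power(L, k)) for k in (1, 2, 3)]
      print(U, S, traces)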

  7. Three-dimensional trend mapping from wire-line logs

    USGS Publications Warehouse

    Doveton, J.H.; Ke-an, Z.

    1985-01-01

    Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics that summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in the lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of shaliness of the unit. However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.
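
    A toy sketch of the moments-and-polynomial idea for a single synthetic gamma-ray trace (illustrative only, not the USGS workflow or data) is shown below:

      # Depth-weighted moments of a log trace plus a least-squares polynomial trend.
      import numpy as np

      rng = np.random.default_rng(2)
      depth = np.linspace(0.0, 1.0, 200)                 # normalized depth in the unit
      log = 60 + 40 * depth - 80 * (depth - 0.5) ** 2 + 5 * rng.normal(size=depth.size)

      # First few discrete moments of the trace about the top of the unit.
      moments = [np.mean(log * depth ** k) for k in range(4)]

      # Least-squares polynomial trend (degree 3) and the share of variance it explains.
      coeffs = np.polyfit(depth, log, deg=3)
      trend = np.polyval(coeffs, depth)
      r2 = 1 - np.var(log - trend) / np.var(log)
      print("moments:", np.round(moments, 2), " variance explained:", round(r2, 3))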

  8. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

    This paper presents a field-curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. The refractive optical elements form a curved virtual image, in a direction away from the odd polynomial mirror surface, of the image formed on the digital micromirror device (DMD) panel. The odd polynomial mirror surface then enlarges the curved image so that a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using hyperbolic segments according to the law of reflection. For further optimization, a high-order odd polynomial surface is used to express the freeform mirror surface through a least-squares fitting method. As an example, an ultrashort-TR projection lens that realizes projection onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by ray tracing. The results show that the lens achieves a modulation transfer function of over 60% at 0.5 cycles/mm for all optimized fields, with an f-number of 2.0, a 126° full FOV, <1% distortion, and a TR of 0.46. Moreover, a comparison of the proposed lens's optical specifications with those of traditional projection lenses, aspheric-mirror projection lenses, and conventional short-TR projection lenses indicates that this design has the advantages of an ultrashort TR, a low f-number, a wide full FOV, and small distortion.
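
    The least-squares step for representing a profile with odd polynomial terms only can be sketched as follows; the profile data are synthetic and the optical ray mapping itself is not modeled:

      # Least-squares fit restricted to odd polynomial terms (synthetic mirror profile).
      import numpy as np

      y = np.linspace(-1.0, 1.0, 101)                 # normalized aperture coordinate
      sag = 0.8 * y + 0.15 * y ** 3 - 0.05 * y ** 5   # "target" profile to reproduce
      sag += 1e-3 * np.random.default_rng(3).normal(size=y.size)

      odd_orders = [1, 3, 5, 7, 9]
      A = np.column_stack([y ** k for k in odd_orders])   # design matrix of odd powers
      coeffs, *_ = np.linalg.lstsq(A, sag, rcond=None)

      fit = A @ coeffs
      print(dict(zip(odd_orders, np.round(coeffs, 4))),
            "max residual:", np.max(np.abs(fit - sag)))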

  9. Factorizing the factorization - a spectral-element solver for elliptic equations with linear operation count

    NASA Astrophysics Data System (ADS)

    Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen

    2017-10-01

    The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves a factor larger than 3 and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.
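
    Static condensation itself can be illustrated generically as a Schur-complement elimination of element-interior unknowns; the sketch below uses a small random symmetric positive-definite system and does not reproduce the paper's factorization or its linear operation count:

      # Generic static condensation: eliminate interior DOFs, solve on the interface.
      import numpy as np

      rng = np.random.default_rng(6)
      n_i, n_b = 6, 4                               # interior and interface DOFs
      A = rng.normal(size=(n_i + n_b, n_i + n_b))
      A = A @ A.T + (n_i + n_b) * np.eye(n_i + n_b) # SPD system matrix
      b = rng.normal(size=n_i + n_b)

      Aii, Aib = A[:n_i, :n_i], A[:n_i, n_i:]
      Abi, Abb = A[n_i:, :n_i], A[n_i:, n_i:]
      bi, bb = b[:n_i], b[n_i:]

      S = Abb - Abi @ np.linalg.solve(Aii, Aib)     # Schur complement on the interface
      g = bb - Abi @ np.linalg.solve(Aii, bi)
      xb = np.linalg.solve(S, g)                    # interface solve (PCG in practice)
      xi = np.linalg.solve(Aii, bi - Aib @ xb)      # recover interior unknowns

      print(np.allclose(A @ np.concatenate([xi, xb]), b))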

  10. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials.

    PubMed

    Mashayekhi, S; Razzaghi, M; Tripak, O

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.
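
    The hybrid basis functions themselves are easy to evaluate: on the i-th of N subintervals of [0, 1), the (i, m) hybrid function equals the Bernoulli polynomial B_m(N*t - i + 1) and is zero elsewhere. A small sketch using SymPy for the Bernoulli polynomials (illustrative; the operational matrices of the method are not constructed here):

      # Evaluate hybrid block-pulse / Bernoulli-polynomial basis functions on [0, 1).
      import numpy as np
      import sympy as sp

      t_sym = sp.symbols("t")
      N, M = 4, 3                                    # subintervals, polynomial order

      def hybrid(i, m, t):
          """Value of the (i, m) hybrid function at points t in [0, 1)."""
          Bm = sp.lambdify(t_sym, sp.bernoulli(m, t_sym), "numpy")
          inside = (t >= (i - 1) / N) & (t < i / N)
          return np.where(inside, Bm(N * t - i + 1), 0.0)

      t = np.linspace(0.0, 0.999, 8)
      print(np.round(hybrid(2, 2, t), 4))            # second subinterval, B_2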

  11. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials

    PubMed Central

    Mashayekhi, S.; Razzaghi, M.; Tripak, O.

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638

  12. A modified interval symmetric single step procedure ISS-5D for simultaneous inclusion of polynomial zeros

    NASA Astrophysics Data System (ADS)

    Sham, Atiyah W. M.; Monsi, Mansor; Hassan, Nasruddin; Suleiman, Mohamed

    2013-04-01

    The aim of this paper is to present a new modified interval symmetric single-step procedure, ISS-5D, which is an extension of the previous procedure ISS1. The ISS-5D method produces successively smaller intervals that are guaranteed to still contain the zeros. The efficiency of the method is measured in terms of CPU time and the number of iterations. The procedure is run on five test polynomials, and the results obtained are shown in this paper.

  13. Axisymmetric solid elements by a rational hybrid stress method

    NASA Technical Reports Server (NTRS)

    Tian, Z.; Pian, T. H. H.

    1985-01-01

    Four-node axisymmetric solid elements are derived by a new version of the hybrid stress method in which the assumed stresses are expressed as complete polynomials in natural coordinates. The stress equilibrium conditions are introduced through the use of additional displacements as Lagrange multipliers. A rational procedure is to choose the displacement terms such that the resulting strains are also complete polynomials of the same order. Example problems all indicate that elements obtained by this procedure lead to better results in displacements and stresses than those obtained by other finite elements.

  14. Operational method of solution of linear non-integer ordinary and partial differential equations.

    PubMed

    Zhukovsky, K V

    2016-01-01

    We propose an operational method, with recourse to generalized forms of orthogonal polynomials, for the solution of a variety of differential equations of mathematical physics. Operational definitions of generalized families of orthogonal polynomials are used in this context. Integral transforms and the operational exponent, together with some special functions, are also employed in the solutions. Examples of solutions of physical problems, related to heat propagation in various models, evolutionary processes, Black-Scholes-like equations, etc., are demonstrated using the operational technique.

  15. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    PubMed

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (palabos).
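
    The core regularization step can be sketched for a single D2Q9 cell using the standard projection of the non-equilibrium populations onto their second-order Hermite contribution; this is the textbook regularized-BGK formula, not the palabos implementation:

      # Regularized BGK collision for one D2Q9 cell (standard formulation, illustrative).
      import numpy as np

      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]], float)
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)
      cs2 = 1.0 / 3.0

      def feq(rho, u):
          cu = c @ u
          return w * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

      def regularized_collision(f, tau):
          rho = f.sum()
          u = (f[:, None] * c).sum(0) / rho
          fneq = f - feq(rho, u)
          # Second-order non-equilibrium moment Pi = sum_i fneq_i c_i c_i
          Pi = np.einsum("i,ij,ik->jk", fneq, c, c)
          # Keep only the second-order Hermite contribution of fneq
          Q = np.einsum("ij,ik->ijk", c, c) - cs2 * np.eye(2)
          fneq_reg = w / (2 * cs2**2) * np.einsum("ijk,jk->i", Q, Pi)
          return feq(rho, u) + (1 - 1/tau) * fneq_reg

      f0 = feq(1.0, np.array([0.05, 0.02])) * (1 + 0.01*np.random.default_rng(4).normal(size=9))
      print(np.round(regularized_collision(f0, tau=0.8), 5))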

  16. Fast Implicit Methods For Elliptic Moving Interface Problems

    DTIC Science & Technology

    2015-12-11

    A fast algorithm was derived, analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D-dimensional Euclidean space. These transforms ... evaluation, and one to three orders of magnitude slower than the classical uniform Fast Fourier Transform. Second, bilinear quadratures ---which ...

  17. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
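
    The role of the Forsythe polynomials can be illustrated with the standard three-term recurrence that generates polynomials orthogonal over a discrete set of sample points, which makes the least-squares normal equations diagonal and hence well conditioned; the abscissae and data below are synthetic, not frequency response measurements:

      # Least-squares fit via Forsythe's discretely orthogonal polynomials.
      import numpy as np

      def forsythe_fit(x, y, degree):
          """Fit y(x) with polynomials orthogonal over the sample points x."""
          P, coef = [np.ones_like(x)], []
          for k in range(degree + 1):
              pk = P[-1]
              norm = (pk * pk).sum()
              coef.append((y * pk).sum() / norm)        # normal equations are diagonal
              if k == degree:
                  break
              a = (x * pk * pk).sum() / norm
              if k == 0:
                  P.append((x - a) * pk)
              else:
                  b = norm / (P[-2] * P[-2]).sum()
                  P.append((x - a) * pk - b * P[-2])
          return sum(c * p for c, p in zip(coef, P))

      x = np.linspace(0, 100, 400)                      # e.g. a frequency axis
      y = 1e-3*x**3 - 0.05*x**2 + x + np.random.default_rng(5).normal(size=x.size)
      print("RMS residual:", np.sqrt(np.mean((y - forsythe_fit(x, y, 3)) ** 2)))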

  18. Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.

    2013-09-01

    In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which arise widely in fuel ignition in combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1], and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparison of the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
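
    Generating the shifted Jacobi-Gauss collocation nodes on [0, 1] is straightforward with SciPy; the sketch below only produces the nodes and evaluates the shifted Jacobi polynomial (the nonlinear Bratu collocation system itself is not assembled):

      # Shifted Jacobi-Gauss nodes on [0, 1] and the shifted Jacobi polynomial at them.
      import numpy as np
      from scipy.special import roots_jacobi, eval_jacobi

      alpha, beta, n = 0.0, 0.0, 8                 # alpha = beta = 0 gives Legendre-Gauss
      t, wts = roots_jacobi(n, alpha, beta)        # nodes and weights on [-1, 1]
      x = (t + 1.0) / 2.0                          # shifted nodes on [0, 1]

      # Shifted Jacobi polynomial J_n^(alpha,beta)(x) via the map x -> 2x - 1.
      Jn = eval_jacobi(n, alpha, beta, 2 * x - 1)
      print(np.round(x, 4))
      print(np.round(Jn, 6))                       # ~0 at its own Gauss nodes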

  19. Polynomial decay rate of a thermoelastic Mindlin-Timoshenko plate model with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-02-01

    In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.

  20. A BiCGStab2 variant of the IDR(s) method for solving linear equations

    NASA Astrophysics Data System (ADS)

    Abe, Kuniyoshi; Sleijpen, Gerard L. G.

    2012-09-01

    The hybrid Bi-Conjugate Gradient (Bi-CG) methods, such as the BiCG STABilized (BiCGSTAB), BiCGstab(l), BiCGStab2 and BiCG×MR2 methods, are well-known solvers for linear equations with a nonsymmetric matrix. The Induced Dimension Reduction (IDR(s)) method has recently been proposed, and it has been reported that IDR(s) is often more effective than the hybrid BiCG methods. A variant of IDR(s) combining the stabilization polynomial of BiCGstab(l) has been designed to improve the convergence of the original IDR(s) method. We therefore propose IDR(s) combined with the stabilization polynomial of BiCGStab2. Numerical experiments show that our proposed variant of IDR(s) is more effective than the original IDR(s) and BiCGStab2 methods.
