Hong, X; Harris, C J
2000-01-01
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based on basis functions that are Bézier-Bernstein polynomials. The algorithm is general in that it copes with n-dimensional inputs, using an additive decomposition construction to overcome the curse of dimensionality associated with large n. For completeness of the generalized procedure, the construction also introduces univariate Bézier-Bernstein polynomial functions. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, partition of unity, and interpretability of the basis functions as fuzzy membership functions; in addition they offer structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network combines the additive decomposition with two separate basis-function formation procedures, for the univariate and bivariate Bézier-Bernstein polynomials used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
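The nonnegativity and partition-of-unity properties cited in the abstract can be checked directly from the definition of the Bernstein basis. The following minimal Python sketch (an illustration of the basis properties, not the paper's construction algorithm) verifies both on [0, 1]:

```python
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein basis functions B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k) on [0, 1]."""
    return [comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)]

# Nonnegativity and partition of unity: each basis function is >= 0 on [0, 1]
# and all n+1 of them sum to 1 at every point of the interval.
for x in [0.0, 0.25, 0.5, 0.9, 1.0]:
    basis = bernstein_basis(4, x)
    assert all(b >= 0 for b in basis)
    assert abs(sum(basis) - 1.0) < 1e-12
```

These are exactly the properties that let the basis functions double as fuzzy membership functions.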
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2008-10-01
We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth-root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(-φ(x)), giving a unified treatment of the so-called Freud case (i.e., when φ has polynomial growth at infinity) and Erdös case (when φ grows faster than any polynomial at infinity). In addition, we provide a new proof of the bound on the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention, since they provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices are estimated by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomial kernel function with a Gaussian radial basis kernel function, so that it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
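As a rough illustration of the mixed-kernel idea, a weighted sum of a polynomial kernel and a Gaussian RBF kernel is again a valid (positive semidefinite) kernel. The weight `lam` and the hyperparameters below are illustrative choices, not values from the paper, and a plain polynomial kernel stands in for the orthogonal-polynomial kernel:

```python
import math

def poly_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel (x.y + c)^d: captures global trends."""
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + c) ** degree

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel exp(-gamma ||x - y||^2): captures local behaviour."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def mixed_kernel(x, y, lam=0.5, degree=2, c=1.0, gamma=1.0):
    """Convex combination of the two kernels; a nonnegative sum of PSD kernels is PSD."""
    return lam * poly_kernel(x, y, degree, c) + (1 - lam) * rbf_kernel(x, y, gamma)
```

An SVR model trained with `mixed_kernel` inherits both the global and the local characteristics mentioned in the abstract.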
Non-Abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations
NASA Astrophysics Data System (ADS)
Ariznabarreta, Gerardo; García-Ardila, Juan C.; Mañas, Manuel; Marcellán, Francisco
2018-05-01
In this paper, Geronimus–Uvarov perturbations for matrix orthogonal polynomials on the real line are studied and then applied to the analysis of non-Abelian integrable hierarchies. The orthogonality is understood in full generality, i.e. in terms of a nondegenerate continuous sesquilinear form, determined by a quasidefinite matrix of bivariate generalized functions with a well-defined support. We derive Christoffel-type formulas that give the perturbed matrix biorthogonal polynomials and their norms in terms of the original ones. The keystone for this finding is the Gauss–Borel factorization of the Gram matrix. Geronimus–Uvarov transformations are considered in the context of the 2D non-Abelian Toda lattice and noncommutative KP hierarchies. The interplay between transformations and integrable flows is discussed. Miwa shifts, τ-ratio matrix functions and Sato formulas are given. Bilinear identities involving Geronimus–Uvarov transformations are found: first for the Baker functions, then for the biorthogonal polynomials and their second-kind functions, and finally for the τ-ratio matrix functions.
A recursive algorithm for Zernike polynomials
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
The analysis of a function defined on a rotationally symmetric system, with either a circular or annular pupil, is discussed. To analyze such systems numerically, it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm is developed that can be used to generate the Zernike polynomials up to a given order. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding (J-1) row, up to the (N+1)th term, are needed for generating the (J,N)th term; thus, the algorithm generates an upper left-triangular table. The algorithm was implemented together with the necessary support programs.
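The abstract does not spell out the recursion in full, but the radial Zernike polynomials it generates can also be evaluated from the standard explicit sum over a full circular pupil (the ε = 0 case), which makes a convenient cross-check. A sketch, not the report's recursive algorithm:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) via the standard explicit sum
    (circular pupil; requires n >= m >= 0 and n - m even, else R is zero)."""
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

For example, the formula reproduces the familiar defocus term R_2^0(ρ) = 2ρ² - 1 and the spherical term R_4^0(ρ) = 6ρ⁴ - 6ρ² + 1.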
Functionals of Gegenbauer polynomials and D-dimensional hydrogenic momentum expectation values
NASA Astrophysics Data System (ADS)
Van Assche, W.; Yáñez, R. J.; González-Férez, R.; Dehesa, Jesús S.
2000-09-01
The system of Gegenbauer or ultraspherical polynomials {C_n^λ(x); n = 0, 1, …} is a classical family of polynomials orthogonal with respect to the weight function ω_λ(x) = (1 - x^2)^(λ - 1/2) on the support interval [-1, +1]. Integral functionals of Gegenbauer polynomials with integrand f(x)[C_n^λ(x)]^2 ω_λ(x), where f(x) is an arbitrary function which does not depend on n or λ, are considered in this paper. First, a general recursion formula for these functionals is obtained. Then, the explicit expression for some specific functionals of this type is found in a closed and compact form, namely for the functionals with f(x) equal to (1-x)^α (1+x)^β, log(1-x^2), and (1+x)log(1+x), which appear in numerous physico-mathematical problems. Finally, these functionals are used in the explicit evaluation of the momentum expectation values of D-dimensional hydrogenic systems, which are given by means of a terminating 5F4 hypergeometric function with unit argument, a considerable improvement with respect to Hey's expression (the only one existing up to now), which requires a double sum.
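Functionals of the form above, with integrand f(x)[C_n^λ(x)]²ω_λ(x), can be explored numerically from the three-term recurrence for the Gegenbauer polynomials. The crude midpoint quadrature below is only a sanity check of the setup, not the closed forms derived in the paper:

```python
import math

def gegenbauer(n, lam, x):
    """C_n^lambda(x) via the three-term recurrence
    n C_n = 2x(n + lam - 1) C_{n-1} - (n + 2 lam - 2) C_{n-2}."""
    c_prev, c = 1.0, 2.0 * lam * x
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * x * (k + lam - 1) * c - (k + 2 * lam - 2) * c_prev) / k
    return c

def functional(f, n, lam, steps=20000):
    """Midpoint-rule estimate of  integral_{-1}^{1} f(x) [C_n^lam(x)]^2 w_lam(x) dx."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        w = (1.0 - x * x) ** (lam - 0.5)
        total += f(x) * gegenbauer(n, lam, x) ** 2 * w * h
    return total

# Sanity check: for lam = 1 the Gegenbauer polynomials reduce to the Chebyshev
# polynomials U_n, and with f = 1 the functional is the squared weighted norm,
# pi/2, independent of n.
assert abs(functional(lambda x: 1.0, 3, 1.0) - math.pi / 2) < 1e-3
```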
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
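The comparison between interpolating and Taylor polynomials can be reproduced in a few lines. The nodes and evaluation point below are illustrative choices, not the article's own examples:

```python
import math

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Interpolate e^x at three nodes spread over [0, 1] and compare against the
# degree-2 Taylor polynomial about 0 at a point away from the expansion centre.
nodes = [0.0, 0.5, 1.0]
values = [math.exp(t) for t in nodes]
x = 0.75
interp_err = abs(lagrange_interp(nodes, values, x) - math.exp(x))
taylor_err = abs((1 + x + x * x / 2) - math.exp(x))
assert interp_err < taylor_err  # the interpolant wins away from the centre
```

Spreading the nodes over the interval, rather than matching derivatives at a single point, is what keeps the interpolation error more uniform.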
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
Algebraic special functions and SO(3,2)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es
2013-06-15
A ladder structure of operators is presented for the associated Legendre polynomials and the spherical harmonics. In both cases these operators belong to the irreducible representation of the Lie algebra so(3,2) with quadratic Casimir equal to −5/4. As both are also bases of square-integrable functions, the universal enveloping algebra of so(3,2) is thus shown to be homomorphic to the space of linear operators acting on the L² functions defined on (−1,1)×Z and on the sphere S², respectively. The presence of a ladder structure is suggested to be the general condition to obtain a Lie algebra representation, defining in this way the "algebraic special functions" that are proposed to be the connection between Lie algebras and square-integrable functions, so that the space of linear operators on the L² functions is homomorphic to the universal enveloping algebra. The passage to the group, by means of the exponential map, shows that the associated Legendre polynomials and the spherical harmonics support the corresponding unitary irreducible representation of the group SO(3,2). Highlights: •The algebraic ladder structure is constructed for the associated Legendre polynomials (ALP). •ALP and spherical harmonics support a unitary irreducible SO(3,2)-representation. •A ladder structure is the condition to get a Lie group representation defining "algebraic special functions". •The "algebraic special functions" connect Lie algebras and L² functions.
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
Luigi Gatteschi's work on asymptotics of special functions and their zeros
NASA Astrophysics Data System (ADS)
Gautschi, Walter; Giordano, Carla
2008-12-01
A good portion of Gatteschi's research publications, about 65%, is devoted to asymptotics of special functions and their zeros. Most prominent among the special functions studied are the classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and, by implication, Hermite polynomials. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's forms. This work is reviewed here and organized along methodological lines.
Fast beampattern evaluation by polynomial rooting
NASA Astrophysics Data System (ADS)
Häcker, P.; Uhlich, S.; Yang, B.
2011-07-01
Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge number of beampatterns has to be calculated and their maxima detected. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern quickly and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a generalized version of the gcd (greatest common divisor) function in order to write the beampattern as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden further by decreasing the order of the polynomial.
Gabor-based kernel PCA with fractional power polynomial models for face recognition.
Liu, Chengjun
2004-05-01
This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville type operator to them. Various properties, such as an explicit representation in terms of hypergeometric functions, differential equations, and recurrence relations, are derived.
Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach
NASA Astrophysics Data System (ADS)
Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer
2018-02-01
This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The control design proposed in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered, and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.
Tutte polynomial in functional magnetic resonance imaging
NASA Astrophysics Data System (ADS)
García-Castillón, Marlly V.
2015-09-01
Methods of graph theory are applied to the processing of functional magnetic resonance images; specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial of a given graph is #P-hard, even for planar graphs. For the practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial of the resulting neural networks is computed and some numerical invariants of the networks are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
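For graphs far smaller than fMRI connectivity networks, the Tutte polynomial can be computed directly by the textbook deletion-contraction recursion (the #P-hardness noted above makes this infeasible at scale, which is why the paper relies on Maple packages). A plain-Python sketch, representing the polynomial as a dict mapping (i, j) to the coefficient of x^i y^j:

```python
def tutte(edges):
    """Tutte polynomial of a multigraph given as a list of (u, v) edges.
    Deletion-contraction: T = 1 if no edges; y*T(G-e) for a loop e;
    x*T(G/e) for a bridge e; T(G-e) + T(G/e) otherwise."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop: multiply by y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if is_bridge(edges, u, v):                   # bridge: multiply by x
        return {(i + 1, j): c for (i, j), c in tutte(contracted).items()}
    out = dict(tutte(rest))                      # delete + contract
    for key, c in tutte(contracted).items():
        out[key] = out.get(key, 0) + c
    return out

def is_bridge(edges, u, v):
    """True if removing the first (u, v) edge disconnects u from v."""
    rest = edges[1:]
    seen, stack = {u}, [u]
    while stack:
        a = stack.pop()
        for p, q in rest:
            for x, y in ((p, q), (q, p)):
                if x == a and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return v not in seen
```

On the triangle this yields T(K3; x, y) = x² + x + y, and evaluating at (1, 1) recovers the number of spanning trees, 3.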
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
A note on the zeros of Freud-Sobolev orthogonal polynomials
NASA Astrophysics Data System (ADS)
Moreno-Balcazar, Juan J.
2007-10-01
We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^(-x^4) on the real line are real, simple, and interlace with the zeros of the Freud polynomials, i.e., the polynomials orthogonal with respect to the weight function e^(-x^4). Some numerical examples are shown.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov chain Monte Carlo sampling which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Symmetric polynomials in information theory: Entropy and subentropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsa, Richard; Mitchison, Graeme
2015-06-15
Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions, and we derive the density functions of their Lévy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.
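Both ingredients, the elementary symmetric polynomials of the probabilities and the entropy H, are easy to compute; the sketch below only sets up these two quantities (the subentropy Q and the Bernstein/Pick-function results are beyond a few lines). Since H is invariant under permutations of the probabilities, it is indeed determined by the symmetric polynomials alone:

```python
import math

def elementary_symmetric(ps):
    """e_0, ..., e_n of the values in ps, built by multiplying out
    prod_i (1 + p_i t) one factor at a time (updating high degrees first)."""
    e = [1.0] + [0.0] * len(ps)
    for p in ps:
        for k in range(len(ps), 0, -1):
            e[k] += p * e[k - 1]
    return e

def shannon_entropy(ps):
    """H(p) = -sum_i p_i ln p_i (natural log), a symmetric function of the p_i."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# For a probability vector, e_1 = sum_i p_i = 1, and H is unchanged by
# reordering the p_i.
probs = [0.5, 0.3, 0.2]
assert abs(elementary_symmetric(probs)[1] - 1.0) < 1e-12
assert abs(shannon_entropy(probs) - shannon_entropy([0.2, 0.5, 0.3])) < 1e-12
```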
Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos
2002-07-25
The method employs orthogonal polynomial functionals from the Askey scheme (which includes the basic hypergeometric polynomials that generalize the Jacobi polynomials; see Memoirs Amer. Math. Soc., AMS) as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection is used to solve (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combination.
NASA Astrophysics Data System (ADS)
Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.
2014-10-01
In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative-term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully make up the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Doha, E. H.
2003-05-01
A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
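A classical special case of the derivative formulas discussed above is the connection L_n'(x) = -(L_0(x) + L_1(x) + ... + L_{n-1}(x)), expressing the derivative of a Laguerre polynomial as a linear combination of lower-degree Laguerre polynomials. The sketch below verifies this textbook identity numerically via the three-term recurrence (it is not the paper's general-order formula):

```python
def laguerre(n, x):
    """Laguerre polynomial L_n(x) via the three-term recurrence
    (k+1) L_{k+1} = (2k + 1 - x) L_k - k L_{k-1}, with L_0 = 1, L_1 = 1 - x."""
    prev, cur = 1.0, 1.0 - x
    if n == 0:
        return prev
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

def laguerre_derivative(n, x, h=1e-6):
    """Central finite-difference estimate of L_n'(x)."""
    return (laguerre(n, x + h) - laguerre(n, x - h)) / (2 * h)

# Connection formula: L_n'(x) = -(L_0(x) + L_1(x) + ... + L_{n-1}(x)).
x = 0.7
lhs = laguerre_derivative(5, x)
rhs = -sum(laguerre(k, x) for k in range(5))
assert abs(lhs - rhs) < 1e-5
```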
ERIC Educational Resources Information Center
Schweizer, Karl
2006-01-01
A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…
NASA Astrophysics Data System (ADS)
Doha, E. H.
2002-02-01
An analytical formula expressing the ultraspherical coefficients of an expansion of an infinitely differentiable function that has been integrated an arbitrary number of times, in terms of the coefficients of the original expansion of the function, is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given expressing explicitly the repeated integrals of ultraspherical polynomials of any degree in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansions are stated and proved. Some applications of ultraspherical polynomials to the solution of ordinary and partial differential equations are described.
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross- and auto-spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
NASA Astrophysics Data System (ADS)
Miller, W., Jr.; Li, Q.
2015-04-01
The Wilson and Racah polynomials can be characterized as basis functions for irreducible representations of the quadratic symmetry algebra of the quantum superintegrable system on the 2-sphere, HΨ = EΨ, with generic 3-parameter potential. Clearly, the polynomials are expansion coefficients for one eigenbasis of a symmetry operator L2 of H in terms of an eigenbasis of another symmetry operator L1, but the exact relationship appears not to have been made explicit. We work out the details of the expansion to show, explicitly, how the polynomials arise and how the principal properties of these functions (the measure, 3-term recurrence relation, 2nd-order difference equation, duality of these relations, permutation symmetry, intertwining operators and an alternate derivation of Wilson functions) follow from the symmetry of this quantum system. This paper is an exercise to show that quantum mechanical concepts and recurrence relations for Gaussian hypergeometric functions alone suffice to explain these properties; we make no assumptions about the structure of Wilson polynomials/functions, but derive them from quantum principles. There is active interest in the relation between multivariable Wilson polynomials and the quantum superintegrable system on the n-sphere with generic potential, and these results should aid in the generalization. Contracting function space realizations of irreducible representations of this quadratic algebra to the other superintegrable systems, one can obtain the full Askey scheme of orthogonal hypergeometric polynomials. All of these contractions of superintegrable systems with potential are uniquely induced by Wigner Lie algebra contractions of so(3, C) and e(2, C). All of the polynomials produced are interpretable as quantum expansion coefficients. It is important to extend this process to higher dimensions.
Zhao, Chunyu; Burge, James H
2007-12-24
Zernike polynomials provide a well-known orthogonal set of scalar functions over a circular domain and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represents vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as the gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion in optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
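The Gram-Schmidt step itself is generic. The sketch below orthonormalises plain Euclidean vectors for illustration, whereas the paper applies the same procedure with an inner product defined by integrating vector fields over the circular domain:

```python
import math

def gram_schmidt(vectors):
    """Orthonormalise a list of vectors (lists of floats) under the Euclidean
    inner product: subtract projections onto the basis built so far, then
    normalise. Near-dependent vectors (tiny residual norm) are dropped."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm > 1e-12:
            basis.append([x / norm for x in w])
    return basis
```

Replacing the dot product with a quadrature of the vector-field inner product over the unit disk turns this into the orthonormalisation used for the gradient functions.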
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
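The abstract's best-performing method is an integrated third-order B-spline correction. As a hedged illustration of the general technique (adding a short polynomial correction around each discontinuity of a naive waveform), here is the simplest two-sample variant, often called polyBLEP; the function names and the first-order polynomial are illustrative and are not the third-order B-spline method evaluated above.

```python
def poly_blep(t, dt):
    """Two-sample polynomial correction (residual of a linear-interpolation
    bandlimited step) around a discontinuity at phase t = 0, wrapping at 1.
    dt is the phase increment per sample (f0 / fs)."""
    if t < dt:                  # just after the discontinuity
        x = t / dt
        return x + x - x * x - 1.0
    if t > 1.0 - dt:            # just before the discontinuity
        x = (t - 1.0) / dt
        return x * x + x + x + 1.0
    return 0.0

def sawtooth(phase, dt):
    """Naive bipolar sawtooth plus the polynomial correction."""
    naive = 2.0 * phase - 1.0
    return naive - poly_blep(phase, dt)

# One period at f0/fs = 0.01 (e.g. 441 Hz at 44.1 kHz).
dt = 0.01
samples = [sawtooth((n * dt) % 1.0, dt) for n in range(100)]
```

The correction smooths the jump so that the waveform passes continuously through zero at the wrap point instead of jumping from +1 to -1 in one sample.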
The Translated Dowling Polynomials and Numbers.
Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S
2014-01-01
More properties for the translated Whitney numbers of the second kind such as horizontal generating function, explicit formula, and exponential generating function are proposed. Using the translated Whitney numbers of the second kind, we will define the translated Dowling polynomials and numbers. Basic properties such as exponential generating functions and explicit formula for the translated Dowling polynomials and numbers are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained are generalizations of some of the known results involving the classical Bell polynomials and numbers. Lastly, we establish the Hankel transform of the translated Dowling numbers.
Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach
NASA Astrophysics Data System (ADS)
Kotaru, Appala Raju; Joshi, Ramesh C.
Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes the protein's evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine (SVM) classification with a radial basis function (RBF) kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear kernel and the polynomial kernel, and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. In analyzing these results, we show that it is feasible to use an SVM classifier with an RBF kernel to predict gene function from phylogenetic profiles.
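The kernel computation at the heart of the approach above is easy to sketch. Below, the RBF kernel is evaluated on toy binary phylogenetic profiles (presence/absence across reference genomes); the gene names, profiles, and gamma value are illustrative assumptions, and a real experiment would pass the resulting Gram matrix to an SVM trainer.

```python
import math

def rbf_kernel(p, q, gamma=0.5):
    """Gaussian RBF kernel between two binary phylogenetic profiles."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    return math.exp(-gamma * d2)

# Toy profiles: presence/absence of a protein across 8 reference genomes.
profiles = {
    "geneA": [1, 0, 1, 1, 0, 1, 0, 1],
    "geneB": [1, 0, 1, 1, 0, 1, 1, 1],  # nearly identical to geneA
    "geneC": [0, 1, 0, 0, 1, 0, 1, 0],  # complementary pattern
}

# Gram matrix that an SVM with an RBF kernel would consume.
names = list(profiles)
K = [[rbf_kernel(profiles[a], profiles[b]) for b in names] for a in names]
```

Genes with similar profiles (co-inherited, hence likely functionally linked) get kernel values near 1; dissimilar profiles decay toward 0.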
Imaging characteristics of Zernike and annular polynomial aberrations.
Mahajan, Virendra N; Díaz, José Antonio
2013-04-01
The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
Lifting q-difference operators for Askey-Wilson polynomials and their weight function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atakishiyeva, M. K.; Atakishiyev, N. M., E-mail: natig_atakishiyev@hotmail.com
2011-06-15
We determine an explicit form of a q-difference operator that transforms the continuous q-Hermite polynomials H{sub n}(x | q) of Rogers into the Askey-Wilson polynomials p{sub n}(x; a, b, c, d | q) on the top level in the Askey q-scheme. This operator represents a special convolution-type product of four one-parameter q-difference operators of the form {epsilon}{sub q}(c{sub q}D{sub q}) (where c{sub q} are some constants), defined as Exton's q-exponential function {epsilon}{sub q}(z) in terms of the Askey-Wilson divided q-difference operator D{sub q}. We also determine another q-difference operator that lifts the orthogonality weight function for the continuous q-Hermite polynomials H{sub n}(x | q) up to the weight function associated with the Askey-Wilson polynomials p{sub n}(x; a, b, c, d | q).
Animating Nested Taylor Polynomials to Approximate a Function
ERIC Educational Resources Information Center
Mazzone, Eric F.; Piper, Bruce R.
2010-01-01
The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…
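The moving-center construction described above can be reproduced numerically: fix the degree, slide the center toward the evaluation point, and watch the values nest monotonically toward the function value. The choice of exp, the evaluation point, and the centers below are illustrative assumptions.

```python
import math

def taylor_exp(x, center, degree):
    """Degree-n Taylor polynomial of exp about the given center, at x."""
    return sum(math.exp(center) * (x - center) ** k / math.factorial(k)
               for k in range(degree + 1))

# Fix the degree and move the center toward the evaluation point x = 1.
x, degree = 1.0, 3
values = [taylor_exp(x, c, degree) for c in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

For exp the Lagrange remainder is positive, so each polynomial undershoots e at x = 1 and the values increase as the center approaches x, which is the nesting the animations display.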
NASA Astrophysics Data System (ADS)
Doha, E. H.
2004-01-01
Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
Orbifold E-functions of dual invertible polynomials
NASA Astrophysics Data System (ADS)
Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi
2016-08-01
An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f , G) consisting of an invertible polynomial f and an abelian group G of its symmetries together with a dual pair (f ˜ , G ˜) . We consider the so-called orbifold E-function of such a pair (f , G) which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.
NASA Astrophysics Data System (ADS)
Abd-Elhameed, W. M.
2017-07-01
In this paper, a new formula relating Jacobi polynomials of arbitrary parameters with the squares of certain fractional Jacobi functions is derived. The derived formula is expressed in terms of a certain terminating hypergeometric function of the type _4F3(1) . With the aid of some standard reduction formulae such as Pfaff-Saalschütz's and Watson's identities, the derived formula can be reduced in simple forms which are free of any hypergeometric functions for certain choices of the involved parameters of the Jacobi polynomials and the Jacobi functions. Some other simplified formulae are obtained via employing some computer algebra algorithms such as the algorithms of Zeilberger, Petkovsek and van Hoeij. Some connection formulae between some Jacobi polynomials are deduced. From these connection formulae, some other linearization formulae of Chebyshev polynomials are obtained. As an application to some of the introduced formulae, a numerical algorithm for solving nonlinear Riccati differential equation is presented and implemented by applying a suitable spectral method.
NASA Astrophysics Data System (ADS)
Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting
2018-02-01
Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for clinical and health-care applications in noninvasive hemodynamic measurements. The baseline shift of the measurement has attracted much attention because of its clinical importance; nonetheless, currently published methods have low reliability or high variability. In this study, we identified a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters of a solid phantom, we compared the fitting performance of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. By using the reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
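A 4th-order least-squares baseline fit of the kind described above can be sketched with nothing but the normal equations. The synthetic drift data below is an illustrative assumption (the study's phantom data is not reproduced here), and for low degrees the normal-equation approach is adequate.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    m = degree + 1
    # A[i][j] = sum x^(i+j),  b[i] = sum y * x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, m))) / A[r][r]
    return coef                               # coef[k] multiplies x^k

def evaluate(coef, x):
    return sum(c * x ** k for k, c in enumerate(coef))

# Synthetic slow baseline drift (a quartic) sampled at 50 points.
xs = [i / 49.0 for i in range(50)]
ys = [0.2 - 0.5 * x + 1.3 * x ** 2 - 0.7 * x ** 3 + 0.4 * x ** 4 for x in xs]
coef = polyfit(xs, ys, 4)
```

Subtracting `evaluate(coef, x)` from the raw signal is the baseline-removal step; the R and SSE comparisons in the abstract are then computed on the residuals.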
2015-08-31
following functions were used: where ... are the Legendre polynomials of degree ... It is assumed that the coefficient standing with ... has the form ... To enforce relaxation rates of high order moments, higher order polynomial basis functions are used. The use of high order polynomials results in strong ... enforced while only polynomials up to second degree were used in the representation of the collision frequency. It can be seen that the new model ...
Zernike Basis to Cartesian Transformations
NASA Astrophysics Data System (ADS)
Mathar, R. J.
2009-12-01
The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
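The tabulation of radial polynomials as powers of the radial distance follows directly from the standard closed-form coefficient formula. A short sketch (helper names are mine; the coefficients of R_n^m are always integers, so integer division is exact):

```python
from math import factorial

def zernike_radial_coeffs(n, m):
    """Coefficients of the 2D radial Zernike polynomial R_n^m as powers of r,
    returned as {power: coefficient}.  Requires n >= m >= 0 and n - m even."""
    assert n >= m >= 0 and (n - m) % 2 == 0
    coeffs = {}
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             // (factorial(k) * factorial((n + m) // 2 - k)
                 * factorial((n - m) // 2 - k)))
        coeffs[n - 2 * k] = c
    return coeffs

def eval_radial(n, m, r):
    """Evaluate R_n^m(r) from its power-basis coefficients."""
    return sum(c * r ** p for p, c in zernike_radial_coeffs(n, m).items())
```

For example, `zernike_radial_coeffs(2, 0)` gives the defocus radial part 2r^2 - 1, and every R_n^m evaluates to 1 at r = 1.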
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data on the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n, given the values of the polynomial and some of its derivatives at exactly as many points as the dimension of the polynomial space, may have no solution, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data are available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high order.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric nature of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
Approximating smooth functions using algebraic-trigonometric polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharapudinov, Idris I
2011-01-14
The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p{sub n}(t)+{tau}{sub m}(t), where p{sub n}(t) is an algebraic polynomial of degree n and {tau}{sub m}(t)=a{sub 0}+{Sigma}{sub k=1}{sup m}(a{sub k} cos k{pi}t + b{sub k} sin k{pi}t) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W{sup r}{sub infinity}(M) and an upper bound for similar approximations in the class W{sup r}{sub p}(M) with 4/3
Polynomial asymptotes of the second kind
NASA Astrophysics Data System (ADS)
Dobbs, David E.
2011-03-01
This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and conics. Prerequisites include the division algorithm for polynomials with coefficients in the field of real numbers and elementary facts about limits from calculus. This note could be used as enrichment material in courses ranging from Calculus to Real Analysis to Abstract Algebra.
Recurrence approach and higher order polynomial algebras for superintegrable monopole systems
NASA Astrophysics Data System (ADS)
Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong
2018-05-01
We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
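A hedged sketch of the bounding step described above: split a domain into segments and compute a rigorous per-segment upper bound of a polynomial. The centering trick used here (bound each term by its absolute value at the segment half-width) is one standard way to get such bounds, not necessarily the verification tool's method; all names are mine.

```python
def poly_eval(coef, x):
    """Horner evaluation; coef[k] multiplies x^k."""
    v = 0.0
    for c in reversed(coef):
        v = v * x + c
    return v

def taylor_shift(coef, c):
    """Coefficients of q(h) = p(c + h), by repeated synthetic division."""
    a = list(coef)
    for i in range(len(a) - 1):
        for j in range(len(a) - 2, i - 1, -1):
            a[j] += c * a[j + 1]
    return a

def segment_upper_bound(coef, lo, hi):
    """Rigorous upper bound of p on [lo, hi]: recenter the polynomial at the
    segment midpoint, then bound each nonconstant term by its magnitude at
    the half-width r."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    q = taylor_shift(coef, c)
    return q[0] + sum(abs(qk) * r ** k for k, qk in enumerate(q) if k > 0)

# p(x) = x^3 - x on [-2, 2], split into 8 equal segments.
coef = [0.0, -1.0, 0.0, 1.0]
edges = [-2.0 + 0.5 * i for i in range(9)]
bounds = [segment_upper_bound(coef, a, b) for a, b in zip(edges, edges[1:])]
```

Splitting the domain tightens the bounds: on the last segment, where the shifted coefficients are all positive, the bound equals the true maximum p(2) = 6.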
On computing closed forms for summations. [polynomials and rational functions
NASA Technical Reports Server (NTRS)
Moenck, R.
1977-01-01
The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational-function part and a transcendental part involving derivatives of the gamma function.
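For the purely polynomial part, the discrete analogue of Hermite's idea is short to demonstrate: rewrite powers in falling factorials (via Stirling numbers of the second kind), which have an elementary discrete antiderivative. This is a generic illustration of summation in closed form, not the paper's full rational-function algorithm; helper names are mine.

```python
from fractions import Fraction

def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """Falling factorial x^(k) = x (x-1) ... (x-k+1)."""
    out = Fraction(1)
    for i in range(k):
        out *= x - i
    return out

def poly_sum(coef, n):
    """Exact closed-form value of sum_{x=0}^{n-1} p(x), with coef[m]
    multiplying x^m.  Uses x^m = sum_k S(m,k) x^(k) together with the
    discrete antiderivative sum_{x=0}^{n-1} x^(k) = n^(k+1) / (k+1)."""
    total = Fraction(0)
    for m, c in enumerate(coef):
        for k in range(m + 1):
            total += Fraction(c) * stirling2(m, k) * falling(n, k + 1) / (k + 1)
    return total
```

For example, `poly_sum([0, 0, 1], n)` reproduces the classical closed form (n-1)n(2n-1)/6 for the sum of squares below n.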
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). ... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended ...
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.
1985-01-01
A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, is included.
Polynomials to model the growth of young bulls in performance tests.
Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B
2014-03-01
The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2009-12-01
We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, of the form , with γ>0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdös (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.
On a Family of Multivariate Modified Humbert Polynomials
Aktaş, Rabia; Erkuş-Duman, Esra
2013-01-01
This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411
Tisserand's polynomials and inclination functions in the theory of artificial earth satellites
NASA Astrophysics Data System (ADS)
Aksenov, E. P.
1986-03-01
The connection between Tisserand's polynomials and inclination functions in the theory of motion of artificial earth satellites is established in the paper. The most important properties of these special functions of celestial mechanics are presented. The problem of expanding the perturbation function in satellite problems is discussed.
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
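The basic building block of the approach above, a least-squares functional approximation in a Chebyshev basis, can be sketched with the discrete-orthogonality fit at Chebyshev nodes and Clenshaw evaluation. This is the generic approximation step, not the paper's adaptive-control law; names and the test function are illustrative.

```python
import math

def cheb_coeffs(f, degree):
    """Chebyshev-series coefficients of f via discrete orthogonality
    at the Chebyshev nodes on [-1, 1]."""
    n = degree + 1
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        s = sum(fv[k] * math.cos(math.pi * j * (k + 0.5) / n)
                for k in range(n))
        coeffs.append((1.0 if j == 0 else 2.0) * s / n)
    return coeffs  # f(x) ~ sum_j coeffs[j] * T_j(x)

def cheb_eval(coeffs, x):
    """Clenshaw recurrence for sum_j coeffs[j] * T_j(x)."""
    b1 = b2 = 0.0
    for cj in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + coeffs[0]

c = cheb_coeffs(lambda x: x * x, 2)  # x^2 = T_0/2 + T_2/2 exactly
```

The near-orthogonality of the discrete basis is what gives the well-conditioned parameter estimates the abstract credits for improved convergence.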
Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws
NASA Astrophysics Data System (ADS)
Hayes, Robert
When a mixture of two substances exists in both the liquid and gas phase at equilibrium, Raoult's and Henry's laws (the ideal-solution and ideal-dilute-solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of either very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate to these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the pressure dependence on the mole fractions of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means of estimating gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work was supported in part by federal Grant NRC-HQ-84-14-G-0059.
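The interpolation idea above amounts to solving a cubic P(x) whose value and slope match Henry's law at x = 0 (slope K, value 0) and Raoult's law at x = 1 (slope and value P_sat). A minimal sketch under those assumed matching conditions; the numeric Henry constant and vapor pressure below are purely illustrative.

```python
def bridge_cubic(K_henry, P_sat):
    """Cubic P(x) = b x + c x^2 + d x^3 matching the ideal-dilute limit
    (P(0) = 0, P'(0) = K_henry) and the ideal-solution limit
    (P(1) = P_sat, P'(1) = P_sat).  Solving the two linear conditions at
    x = 1 gives c = 2 (P_sat - K_henry) and d = -(P_sat - K_henry)."""
    b = K_henry
    c = 2.0 * (P_sat - K_henry)
    d = -(P_sat - K_henry)
    return lambda x: b * x + c * x * x + d * x ** 3

P = bridge_cubic(50.0, 20.0)  # illustrative Henry constant and vapor pressure
```

With four constraints and four cubic coefficients (the constant term being zero), the polynomial is uniquely determined, which is why a single closed-form expression suffices.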
Coherent orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es
2013-08-15
We discuss a fundamental characteristic of orthogonal polynomials, like the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl–Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=−1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials the Lie algebra is extended both to the whole space of the L{sup 2} functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L{sup 2} and, in particular, generalized coherent polynomials are thus obtained. -- Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group.
•The 2nd order Casimir gives rise to a 2nd order differential equation that defines the corresponding OP family. •Generalized coherent polynomials are obtained from OP.
Independence polynomial and matching polynomial of the Koch network
NASA Astrophysics Data System (ADS)
Liao, Yunhua; Xie, Xiaoliang
2015-11-01
The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that the corresponding counting problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
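The recurrence style alluded to above can be illustrated on the simplest case, the path graph (not the Koch network itself): the independence polynomial satisfies I(P_n) = I(P_{n-1}) + x I(P_{n-2}), by conditioning on whether the last vertex is in the independent set. A minimal sketch with my own helper names:

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists (index = degree)."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def independence_poly_path(n):
    """Independence polynomial I(P_n; x) of the path on n vertices via
    I(P_n) = I(P_{n-1}) + x * I(P_{n-2}), as a coefficient list."""
    if n == 0:
        return [1]
    if n == 1:
        return [1, 1]
    prev2, prev = [1], [1, 1]
    for _ in range(n - 1):
        # multiplying by x is a shift: [0] + coefficients
        prev2, prev = prev, poly_add(prev, [0] + prev2)
    return prev
```

Evaluating at x = 1 counts all independent sets, giving the Fibonacci numbers, which is the kind of closed-form counting result the abstract derives for the Koch networks.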
Wavefront analysis from its slope data
NASA Astrophysics Data System (ADS)
Mahajan, Virendra N.; Acosta, Eva
2017-08-01
In the aberration analysis of a wavefront over a certain domain, the polynomials that are orthogonal over that domain and represent balanced wave aberrations for it are used. For example, Zernike circle polynomials are used for the analysis of a circular wavefront. Similarly, the annular polynomials are used to analyze the annular wavefronts for systems with annular pupils, as in a rotationally symmetric two-mirror system, such as the Hubble space telescope. However, when the data available for analysis are the slopes of a wavefront, as, for example, in a Shack-Hartmann sensor, we can integrate the slope data to obtain the wavefront data, and then use the orthogonal polynomials to obtain the aberration coefficients. An alternative is to find vector functions that are orthogonal to the gradients of the wavefront polynomials, and obtain the aberration coefficients directly as the inner products of these functions with the slope data. In this paper, we show that an infinite number of vector functions can be obtained in this manner. We show further that the vector functions that are irrotational are unique and propagate minimum uncorrelated additive random noise from the slope data to the aberration coefficients.
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show effectiveness of the proposed filter compared to the extended Kalman filter.
Generating the patterns of variation with GeoGebra: the case of polynomial approximations
NASA Astrophysics Data System (ADS)
Attorps, Iiris; Björk, Kjell; Radic, Mirko
2016-01-01
In this paper, we report a teaching experiment regarding the theory of polynomial approximations in university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate whether technology-assisted teaching of Taylor polynomials, compared with the traditional way of working at the university level, can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording of the lectures, by a post-test concerning Taylor polynomials in both groups, and by one question regarding Taylor polynomials on the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor polynomials. Furthermore, the research results indicated that applying Variation theory, when planning the technology-assisted teaching, supported and enriched students' learning opportunities in the study group compared with the control group.
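As a small illustration of the mathematics behind the experiment (not the GeoGebra materials themselves), a degree-n Taylor polynomial about a point a can be evaluated directly from the derivatives at a; the function name below is illustrative:

```python
import math

def taylor_poly(f_derivs, a, x, n):
    """Evaluate the degree-n Taylor polynomial of f about a, given the
    derivatives f(a), f'(a), ..., f^(n)(a) as the list f_derivs."""
    return sum(d * (x - a) ** k / math.factorial(k)
               for k, d in enumerate(f_derivs[:n + 1]))
```

For example, with all derivatives of e^x at 0 equal to 1, the degree-5 polynomial at x = 1 already approximates e to within about 0.002.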
Polynomial Asymptotes of the Second Kind
ERIC Educational Resources Information Center
Dobbs, David E.
2011-01-01
This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and…
2014-04-01
The CG and DG horizontal discretizations employ high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) points. Inside each element we build (N + 1) GLL quadrature points, where N indicates the polynomial order of the basis.
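The GLL nodes mentioned above are the endpoints ±1 together with the roots of P_N'; a minimal NumPy sketch (the function name is illustrative, not from the source):

```python
import numpy as np

def gll_points(N):
    """Gauss-Lobatto-Legendre points for polynomial order N: the endpoints
    -1 and +1 plus the N-1 interior roots of P_N'(x), i.e. N+1 nodes."""
    interior = np.polynomial.legendre.Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], interior, [1.0]))
```

For N = 2 this gives the familiar nodes -1, 0, 1.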
NASA Astrophysics Data System (ADS)
Soare, S.; Yoon, J. W.; Cazacu, O.
2007-05-01
With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3. We prove that a sixth order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.
Venus radar mapper attitude reference quaternion
NASA Technical Reports Server (NTRS)
Lyons, D. T.
1986-01-01
Polynomial functions of time are used to specify the components of the quaternion which represents the nominal attitude of the Venus Radar mapper spacecraft during mapping. The following constraints must be satisfied in order to obtain acceptable synthetic array radar data: the nominal attitude function must have a large dynamic range, the sensor orientation must be known very accurately, the attitude reference function must use as little memory as possible, and the spacecraft must operate autonomously. Fitting polynomials to the components of the desired quaternion function is a straightforward method for providing a very dynamic nominal attitude using a minimum amount of on-board computer resources. Although the attitude from the polynomials may not be exactly the one requested by the radar designers, the polynomial coefficients are known, so they do not contribute to the attitude uncertainty. Frequent coefficient updates are not required, so the spacecraft can operate autonomously.
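A minimal sketch of the basic idea (an independent polynomial fit per quaternion component, renormalized at evaluation so the attitude stays a unit quaternion; names and degree are illustrative, not the flight software):

```python
import numpy as np

def fit_quaternion_polys(t, quats, deg):
    """Fit one polynomial of degree `deg` per quaternion component over
    times t; the returned evaluator renormalizes to a unit quaternion."""
    polys = [np.polynomial.Polynomial.fit(t, quats[:, i], deg)
             for i in range(4)]
    def evaluate(tq):
        q = np.array([p(tq) for p in polys])
        return q / np.linalg.norm(q)
    return evaluate
```

Only the polynomial coefficients need to be stored on board, which is the memory saving the abstract describes.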
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander
2015-02-28
Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.
Narimani, Mohammad; Lam, H K; Dilmaghani, R; Wolfe, Charles
2011-06-01
Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to reduce the conservativeness caused by considering the whole operating region for the approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
Decision support system for diabetic retinopathy using discrete wavelet transform.
Noronha, K; Acharya, U R; Nayak, K P; Kamath, S; Bhandary, S V
2013-03-01
Prolonged duration of diabetes may affect the tiny blood vessels of the retina, causing diabetic retinopathy. Routine eye screening of patients with diabetes helps to detect diabetic retinopathy at an early stage. It is very laborious and time-consuming for doctors to go through many fundus images continuously. Therefore, a decision support system for diabetic retinopathy detection can reduce the burden on ophthalmologists. In this work, we have used discrete wavelet transform and a support vector machine classifier for automated detection of normal and diabetic retinopathy classes. The wavelet-based decomposition was performed up to the second level, and eight energy features were extracted. Two energy features from the approximation coefficients of two levels and six energy values from the details in three orientations (horizontal, vertical and diagonal) were evaluated. These features were fed to the support vector machine classifier with various kernel functions (linear, radial basis function, polynomial of orders 2 and 3) to evaluate the highest classification accuracy. We obtained the highest average classification accuracy, sensitivity and specificity of more than 99% with the support vector machine classifier (polynomial kernel of order 3) using three discrete wavelet transform features. We have also proposed an integrated index called Diabetic Retinopathy Risk Index using clinically significant wavelet energy features to identify normal and diabetic retinopathy classes using just one number. We believe that this (Diabetic Retinopathy Risk Index) can be used as an adjunct tool by doctors during eye screening to cross-check their diagnosis.
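The polynomial kernel behind the best-performing classifier above has a simple closed form; a minimal NumPy sketch (parameter names follow common convention and are not taken from the paper):

```python
import numpy as np

def poly_kernel(X, Y, degree=3, coef0=1.0):
    """Polynomial kernel of the given order, as used in SVM classifiers:
    K(x, y) = (x . y + coef0) ** degree, computed for all row pairs."""
    return (X @ Y.T + coef0) ** degree
```

An SVM replaces the plain dot product with this kernel, which implicitly lifts the wavelet-energy features into a space of monomials up to the given degree.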
A family of Nikishin systems with periodic recurrence coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delvaux, Steven; Lopez, Abey; Lopez, Guillermo L
2013-01-31
Suppose we have a Nikishin system of p measures with the kth generating measure of the Nikishin system supported on an interval Δ_k ⊂ R with Δ_k ∩ Δ_{k+1} = ∅ for all k. It is well known that the corresponding staircase sequence of multiple orthogonal polynomials satisfies a (p+2)-term recurrence relation whose recurrence coefficients, under appropriate assumptions on the generating measures, have periodic limits of period p. (The limit values depend only on the positions of the intervals Δ_k.) Taking these periodic limit values as the coefficients of a new (p+2)-term recurrence relation, we construct a canonical sequence of monic polynomials {P_n}_{n=0}^∞, the so-called Chebyshev-Nikishin polynomials. We show that the polynomials P_n themselves form a sequence of multiple orthogonal polynomials with respect to some Nikishin system of measures, with the kth generating measure being absolutely continuous on Δ_k. In this way we generalize a result of the third author and Rocha [22] for the case p = 2. The proof uses the connection with block Toeplitz matrices, and with a certain Riemann surface of genus zero. We also obtain strong asymptotics and an exact Widom-type formula for functions of the second kind of the Nikishin system for {P_n}_{n=0}^∞. Bibliography: 27 titles.
Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra
NASA Astrophysics Data System (ADS)
Karstens, William; Smith, David
2013-03-01
Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to (1) use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and (2) use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaisultanov, Rashid; Eichler, David
2011-03-15
The dielectric tensor is obtained for a general anisotropic distribution function that is represented as a sum over Legendre polynomials. The result is valid over all of k-space. We obtain growth rates for the Weibel instability for some basic examples of distribution functions.
Correlations of RMT characteristic polynomials and integrability: Hermitean matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osipov, Vladimir Al., E-mail: Vladimir.Osipov@uni-due.d; Kanzieper, Eugene, E-mail: Eugene.Kanzieper@hit.ac.i; Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100
Integrable theory is formulated for correlation functions of characteristic polynomials associated with invariant non-Gaussian ensembles of Hermitean random matrices. By embedding the correlation functions of interest into a more general theory of τ functions, we (i) identify a zoo of hierarchical relations satisfied by τ functions in an abstract infinite-dimensional space and (ii) present a technology to translate these relations into hierarchically structured nonlinear differential equations describing the correlation functions of characteristic polynomials in the physical, spectral space. Implications of this formalism for fermionic, bosonic, and supersymmetric variations of zero-dimensional replica field theories are discussed at length. A particular emphasis is placed on the phenomenon of fermionic-bosonic factorisation of random-matrix-theory correlation functions.
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Umbral Calculus and Holonomic Modules in Positive Characteristic
NASA Astrophysics Data System (ADS)
Kochubei, Anatoly N.
2006-03-01
In the framework of analysis over local fields of positive characteristic, we develop algebraic tools for introducing and investigating various polynomial systems. In this survey paper we describe a function field version of umbral calculus developed on the basis of a relation of binomial type satisfied by the Carlitz polynomials. We consider modules over the Weyl-Carlitz ring, a function field counterpart of the Weyl algebra. It is shown that some basic objects of function field arithmetic, like the Carlitz module, Thakur's hypergeometric polynomials, and analogs of binomial coefficients arising in the positive characteristic version of umbral calculus, generate holonomic modules.
Polynomial solution of quantum Grassmann matrices
NASA Astrophysics Data System (ADS)
Tierz, Miguel
2017-05-01
We study a model of quantum mechanical fermions with matrix-like index structure (with indices N and L) and quartic interactions, recently introduced by Anninos and Silva. We compute the partition function exactly with q-deformed orthogonal polynomials (Stieltjes-Wigert polynomials), for different values of L and arbitrary N. From the explicit evaluation of the thermal partition function, the energy levels and degeneracies are determined. For a given L, the number of states of different energy is quadratic in N, which implies an exponential degeneracy of the energy levels. We also show that at high temperature we have a Gaussian matrix model, which implies a symmetry that swaps N and L, together with a Wick rotation of the spectral parameter. In this limit, we also write the partition function, for generic L and N, in terms of a single generalized Hermite polynomial.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Ahmed, H. M.
2004-08-01
A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of a single Bessel polynomial of a certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae to solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) to building and solving recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel polynomials are also developed.
Optimization of Cubic Polynomial Functions without Calculus
ERIC Educational Resources Information Center
Taylor, Ronald D., Jr.; Hansen, Ryan
2008-01-01
In algebra and precalculus courses, students are often asked to find extreme values of polynomial functions in the context of solving an applied problem; but without the notion of derivative, something is lost. Either the functions are reduced to quadratics, since students know the formula for the vertex of a parabola, or solutions are…
A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media
2010-08-01
applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC)... represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic... of orthogonal polynomials [34,38] or sparse grid approximations [39–41]. It is well known that the global polynomial interpolation cannot resolve lo
A Set of Orthogonal Polynomials That Generalize the Racah Coefficients or 6 - j Symbols.
1978-03-01
Generalized Hypergeometric Functions, Cambridge Univ. Press, Cambridge, 1966. [11] D. Stanton, Some basic hypergeometric polynomials arising from... Some basic hypergeometric analogues of the classical orthogonal polynomials and applications, to appear. [3] C. de Boor and G. H. Golub, The... Report #1833 A SET OF ORTHOGONAL POLYNOMIALS THAT GENERALIZE THE RACAH COEFFICIENTS OR 6-j SYMBOLS Richard Askey and James Wilson
Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions
2013-01-01
assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the... T(·)-time computations, where T(·) is some "nice" (slightly) super-polynomial function (e.g., T(n) = n^{log log log n}). We refer to such proof... put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need
2014-08-04
Chebyshev coefficients of both r and q decay exponentially, although those of r decay at a slightly slower rate. 10.2. Evaluation of Legendre polynomials... In this experiment, we compare the cost of evaluating Legendre polynomials of large order using the standard recurrence relation with the cost of... doing so with a nonoscillatory phase function. For any integer n ≥ 0, the Legendre polynomial P_n(x) of order n is a solution of the second order
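The standard recurrence referred to in the snippet is the three-term relation (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x); a minimal NumPy version:

```python
import numpy as np

def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term
    recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev = np.ones_like(x, dtype=float)   # P_0
    p = np.asarray(x, dtype=float)          # P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

This is the O(n) baseline against which the nonoscillatory-phase-function approach is compared.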
2009-03-01
the 1-D local basis functions. The 1-D Lagrange polynomial local basis function, using Legendre-Gauss-Lobatto interpolation points, was defined by... cases were run using 10th order polynomials, with contours from -0.05 to 0.525 K with an interval of 0.025 K... after 700 s for resolutions: (a) 20, (b) 10, and (c) 5 m.
On the coefficients of integrated expansions of Bessel polynomials
NASA Astrophysics Data System (ADS)
Doha, E. H.; Ahmed, H. M.
2006-03-01
A new formula expressing explicitly the integrals of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another new explicit formula, relating the Bessel coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times to the coefficients of the original expansion of the function, is also established. An application of these formulae to solving ordinary differential equations with varying coefficients is discussed.
Orthogonal Polynomials Associated with Complementary Chain Sequences
NASA Astrophysics Data System (ADS)
Behera, Kiran Kumar; Sri Ranga, A.; Swaminathan, A.
2016-07-01
Using the minimal parameter sequence of a given chain sequence, we introduce the concept of complementary chain sequences, which we view as perturbations of chain sequences. Using the relation between these complementary chain sequences and the corresponding Verblunsky coefficients, the para-orthogonal polynomials and the associated Szegő polynomials are analyzed. Two illustrations, one involving Gaussian hypergeometric functions and the other involving Carathéodory functions are also provided. A connection between these two illustrations by means of complementary chain sequences is also observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vignat, C.; Lamberti, P. W.
2009-10-15
Recently, Cariñena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.
Piecewise polynomial representations of genomic tracks.
Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz
2012-01-01
Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
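A minimal sketch of the piecewise-constant case described above: once breakpoints are given (e.g., from a change-point detector), the least-squares fit on each segment is simply the segment mean. The function name is illustrative, not from the authors' software:

```python
import numpy as np

def piecewise_constant(signal, breakpoints):
    """Piecewise-constant representation of a 1-D signal: on each segment
    the least-squares constant fit is the segment mean."""
    bounds = [0, *breakpoints, len(signal)]
    fit = np.empty(len(signal), dtype=float)
    for a, b in zip(bounds[:-1], bounds[1:]):
        fit[a:b] = signal[a:b].mean()
    return fit
```

Higher-order segments (linear, quadratic) follow the same pattern with a per-segment polynomial fit instead of a mean.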
Sotiropoulou, P; Fountos, G; Martini, N; Koukou, V; Michail, C; Kandarakis, I; Nikiforidis, G
2016-12-01
An X-ray dual energy (XRDE) method was examined, using polynomial nonlinear approximation of inverse functions for the determination of the bone calcium-to-phosphorus (Ca/P) mass ratio. Inverse fitting functions with least-squares estimation were used to determine calcium and phosphate thicknesses. The method was verified by measuring test bone phantoms with a dedicated dual energy system and compared with previously published dual energy data. The accuracy in the determination of the calcium and phosphate thicknesses improved with the polynomial nonlinear inverse function method introduced in this work (errors ranging from 1.4% to 6.2%), compared to the corresponding linear inverse function method (errors ranging from 1.4% to 19.5%).
Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.
Haglund, J; Haiman, M; Loehr, N
2005-02-22
Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schützenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.
Pedestrian detection in crowded scenes with the histogram of gradients principle
NASA Astrophysics Data System (ADS)
Sidla, O.; Rosner, M.; Lypetskyy, Y.
2006-10-01
This paper describes a close-to-real-time, scale-invariant implementation of a pedestrian detector system based on the Histogram of Oriented Gradients (HOG) principle. Salient HOG features are first selected from a manually created, very large database of samples with an evolutionary optimization procedure that directly trains a polynomial Support Vector Machine (SVM). Real-time operation is achieved by a cascaded two-step classifier which first uses a very fast linear SVM (with the same features as the polynomial SVM) to reject most of the irrelevant detections and then computes the decision function with a polynomial SVM on the remaining set of candidate detections. Scale invariance is achieved by running the detector of constant size on scaled versions of the original input images and by clustering the results over all resolutions. The pedestrian detection system has been implemented in two versions: (i) full-body detection, and (ii) upper-body-only detection. The latter is especially suited for very busy and crowded scenarios. On a state-of-the-art PC it is able to run at a frequency of 8-20 frames/sec.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
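The classical construction builds the discrete orthonormal polynomials by a three-term recurrence; the sketch below obtains the same orthonormal-basis least-squares fit by QR-factorizing the Vandermonde matrix over the sample points. This is numerically equivalent for the fitted values, though it is a stand-in for, not the classical Tchebycheff recurrence itself:

```python
import numpy as np

def discrete_orthonormal_fit(x, y, degree):
    """Least-squares polynomial fit of y over points x using a basis
    orthonormalized over the samples; each coefficient is an independent
    projection, the key convenience of discrete orthogonal polynomials."""
    V = np.vander(x, degree + 1, increasing=True)  # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                         # orthonormal basis over x
    coeffs = Q.T @ y                               # independent projections
    return Q @ coeffs                              # fitted values at x
```

Because the basis is orthonormal over the data points, raising the fit order only adds coefficients without changing the ones already computed.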
Polynomial Graphs and Symmetry
ERIC Educational Resources Information Center
Goehle, Geoff; Kobayashi, Mitsuo
2013-01-01
Most quadratic functions are not even, but every parabola has symmetry with respect to some vertical line. Similarly, every cubic has rotational symmetry with respect to some point, though most cubics are not odd. We show that every polynomial has at most one point of symmetry and give conditions under which the polynomial has rotational or…
Numeric Function Generators Using Decision Diagrams for Discrete Functions
2009-05-01
Taylor series and Chebyshev series. Since polynomial functions can be realized with multipliers and adders, any numeric functions can be realized in... NFGs from the decision diagrams. Since numeric functions can be expanded into polynomial functions, such as a Taylor series, in this section, we use... pp. 107-114, July 1995. [13] T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Multi-valued decision diagrams: Theory and appli
NASA Technical Reports Server (NTRS)
Chang, F.-C.; Mott, H.
1974-01-01
This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
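For the common special case of distinct poles, the expansion coefficients reduce to residues r_i = N(p_i)/D'(p_i); a minimal NumPy sketch of that case (not the Taylor/Laurent machinery of the paper):

```python
import numpy as np

def partial_fractions(num, den_roots):
    """Residues of N(s)/D(s) at distinct poles p_i, where D(s) is monic
    with the given roots: r_i = N(p_i) / prod_{j != i} (p_i - p_j)."""
    residues = []
    for i, p in enumerate(den_roots):
        others = np.delete(den_roots, i)
        residues.append(np.polyval(num, p) / np.prod(p - others))
    return residues
```

For example, (s + 3)/((s + 1)(s + 2)) expands as 2/(s + 1) - 1/(s + 2).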
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
An Introduction to Lagrangian Differential Calculus.
ERIC Educational Resources Information Center
Schremmer, Francesca; Schremmer, Alain
1990-01-01
Illustrates how Lagrange's approach applies to the differential calculus of polynomial functions when approximations are obtained. Discusses how to obtain polynomial approximations in other cases. (YP)
Michael, Dada O; Bamidele, Awojoyogbe O; Adewale, Adesola O; Karem, Boubaker
2013-01-01
Nuclear magnetic resonance (NMR) allows for fast, accurate and noninvasive measurement of fluid flow in restricted and non-restricted media. The results of such measurements may be possible for a very small B 0 field and can be enhanced through detailed examination of generating functions that may arise from polynomial solutions of NMR flow equations in terms of Legendre polynomials and Boubaker polynomials. The generating functions of these polynomials can present an array of interesting possibilities that may be useful for understanding the basic physics of extracting relevant NMR flow information from which various hemodynamic problems can be carefully studied. Specifically, these results may be used to develop effective drugs for cardiovascular-related diseases.
Effects of Air Drag and Lunar Third-Body Perturbations on Motion Near a Reference KAM Torus
2011-03-01
[Nomenclature excerpt] ... body. m: 1) mass of satellite; 2) order of associated Legendre polynomial. n: 1) mean motion; 2) degree of associated Legendre polynomial. n3: mean motion ... p_i: ith physical momentum. Pmn: associated Legendre polynomial of order m and degree n. q̇: physical coordinate derivatives vector, [q̇1, ...] ... are constants specifying the shape of the gravitational field; and Pmn are associated Legendre polynomials. When m = n = 0, the geopotential function ...
NASA Astrophysics Data System (ADS)
Alhaidari, A. D.; Taiwo, T. J.
2017-02-01
Using a recent formulation of quantum mechanics without a potential function, we present a four-parameter system associated with the Wilson and Racah polynomials. The continuum scattering states are written in terms of the Wilson polynomials whose asymptotics give the scattering amplitude and phase shift. On the other hand, the finite number of discrete bound states are associated with the Racah polynomials.
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by an especially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNN which outperform considerably some previous constructive polynomial network algorithms on processing benchmark time series.
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 B-spline segments and by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.
NASA Astrophysics Data System (ADS)
Deogracias, E. C.; Wood, J. L.; Wagner, E. C.; Kearfott, K. J.
1999-02-01
The CEPXS/ONEDANT code package was used to produce a library of depth-dose profiles for monoenergetic electrons in various materials for energies ranging from 500 keV to 5 MeV in 10 keV increments. The materials for which depth-dose functions were derived include: lithium fluoride (LiF), aluminum oxide (Al2O3), beryllium oxide (BeO), calcium sulfate (CaSO4), calcium fluoride (CaF2), lithium boron oxide (LiBO), soft tissue, lens of the eye, adipose, muscle, skin, glass and water. All material data sets were fit to five polynomials, each covering a different range of electron energies, using a least squares method. The resulting three-dimensional, fifth-order polynomials give the dose as a function of depth and energy for the monoenergetic electrons in each material. The polynomials can be used to describe an energy spectrum by summing the doses at a given depth for each energy, weighted by the spectral intensity for that energy. An application of the polynomial is demonstrated by explaining the energy dependence of thermoluminescent detectors (TLDs) and illustrating the relationship between TLD signal and actual shallow dose due to beta particles.
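The fit-then-weight procedure can be sketched in a few lines with NumPy. The depth-dose profiles and spectral weights below are invented stand-ins, not the CEPXS/ONEDANT library data:

```python
# Sketch: least-squares polynomial fit of depth-dose profiles, then a
# spectrum-weighted dose sum at a given depth, mirroring the approach
# in the abstract. All numbers here are synthetic.
import numpy as np

depth = np.linspace(0.0, 1.0, 50)                 # normalized depth
dose_1mev = np.exp(-3.0 * depth) * (1 + depth)    # hypothetical profile at E1
dose_2mev = np.exp(-1.5 * depth) * (1 + depth)    # hypothetical profile at E2

# Fifth-order least-squares fits in depth (one per energy)
c1 = np.polyfit(depth, dose_1mev, 5)
c2 = np.polyfit(depth, dose_2mev, 5)

# Spectrum weighting: total dose at depth d for a two-line spectrum
w1, w2 = 0.7, 0.3   # spectral intensities (assumed)
d = 0.25
dose = w1 * np.polyval(c1, d) + w2 * np.polyval(c2, d)
print(round(dose, 4))
```

A real application would fit one polynomial per energy band, as the paper does, and sum over the full measured spectrum.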
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
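The core of a direct polynomial-mapping calibration can be sketched with a linear least-squares fit. The camera model, distortion terms, and basis below are invented for illustration, not the authors' actual model:

```python
# Sketch: fit a polynomial mapping from object-space points (X, Y, Z)
# to a sensor coordinate u without knowing lens parameters, in the
# spirit of direct light-field calibration. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.uniform(-1, 1, (3, 200))   # calibration dot locations

# "True" projection with a mild depth-dependent and cubic distortion
u = 1.2 * X / (1 + 0.1 * Z) + 0.05 * X**3
v = 1.2 * Y / (1 + 0.1 * Z) + 0.05 * Y**3

# Small polynomial basis in (X, Y, Z), up to degree 3
def basis(X, Y, Z):
    return np.column_stack([np.ones_like(X), X, Y, Z, X*Z, Y*Z,
                            X**2, Y**2, X**3, Y**3, X*Z**2, Y*Z**2])

A = basis(X, Y, Z)
cu, *_ = np.linalg.lstsq(A, u, rcond=None)
cv, *_ = np.linalg.lstsq(A, v, rcond=None)

resid = np.max(np.abs(A @ cu - u))
print(resid < 1e-2)   # the cubic basis captures this mild distortion
```

In practice the mapping is fit from dot-card images at multiple depths, and the same coefficients are reused for volumetric reconstruction.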
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
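The Christoffel function that supplies the preconditioning weights is easy to evaluate for a classical family. A minimal sketch for orthonormal Legendre polynomials on [-1, 1] (the normalization and usage here are illustrative, not the paper's full algorithm):

```python
# Sketch: the Christoffel function K_n(x) = 1 / sum_{k<n} p_k(x)^2 for
# orthonormal Legendre polynomials p_k on [-1, 1]; its evaluations give
# the diagonal preconditioning weights in the weighted l1 approach.
import numpy as np
from numpy.polynomial import legendre as L

def christoffel(x, n):
    """Christoffel function of order n at points x (Legendre case)."""
    total = np.zeros_like(np.asarray(x, dtype=float))
    for k in range(n):
        coeffs = [0] * k + [1]
        # P_k normalized so that int_{-1}^{1} p_k^2 dx = 1
        pk = L.legval(x, coeffs) * np.sqrt((2 * k + 1) / 2.0)
        total += pk ** 2
    return 1.0 / total

x = np.linspace(-0.9, 0.9, 5)
print(np.round(christoffel(x, 6), 4))
```

For n = 1 only p_0 = sqrt(1/2) contributes, so K_1 is identically 2 on the interval, which makes a convenient sanity check.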
Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.
2008-01-01
We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202
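The three-term relation the fast algorithm exploits is the classical Hermite recurrence. A short sketch evaluating the (physicists') Hermite polynomials by forward recurrence, H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x):

```python
# Sketch: evaluate H_0..H_n at a point via the three-term recurrence,
# the kind of relation exploited by fast Hermite-transform algorithms.
def hermite_values(n, x):
    """Return [H_0(x), ..., H_n(x)] by forward recurrence."""
    vals = [1.0]
    if n >= 1:
        vals.append(2.0 * x)
    for k in range(1, n):
        vals.append(2.0 * x * vals[k] - 2.0 * k * vals[k - 1])
    return vals

print(hermite_values(4, 0.5))  # [1.0, 1.0, -1.0, -5.0, 1.0]
```

The values agree with the closed forms H_2(x) = 4x^2 - 2, H_3(x) = 8x^3 - 12x, and H_4(x) = 16x^4 - 48x^2 + 12 at x = 0.5.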
Some rules for polydimensional squeezing
NASA Technical Reports Server (NTRS)
Manko, Vladimir I.
1994-01-01
The review of the following results is presented: for mixed-state light of an N-mode electromagnetic field described by a Wigner function of generic Gaussian form, the photon distribution function is obtained and expressed explicitly in terms of Hermite polynomials of 2N variables. The moments of this distribution are calculated and expressed as functions of matrix invariants of the dispersion matrix. The role of a new uncertainty relation depending on the photon-state mixing parameter is elucidated. New sum rules for Hermite polynomials of several variables are found. The photon statistics of polymode even and odd coherent light and squeezed polymode Schroedinger cat light are given explicitly. The photon distribution for polymode squeezed number states, expressed in terms of multivariable Hermite polynomials, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-Hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
Best uniform approximation to a class of rational functions
NASA Astrophysics Data System (ADS)
Zheng, Zhitong; Yong, Jun-Hai
2007-10-01
We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)^2 + K(a,b,c,n)/(x-c) on [a,b], represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy for determining the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some further functions.
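A truncated Chebyshev expansion already gives a near-best uniform approximation, which is the numerical counterpart of the construction above. A sketch for 1/(x-c)^2 on [-1, 1] (the interval, c, and degree are illustrative; this is not the paper's explicit closed form):

```python
# Sketch: near-best uniform polynomial approximation of 1/(x - c)^2
# on [a, b] via a truncated Chebyshev series fitted at Chebyshev nodes.
import numpy as np
from numpy.polynomial import chebyshev as C

a, b, c = -1.0, 1.0, 2.0
f = lambda x: 1.0 / (x - c) ** 2

# Sample at Chebyshev nodes, fit a degree-12 Chebyshev series
x = np.cos(np.pi * (np.arange(200) + 0.5) / 200)   # nodes on [-1, 1]
series = C.Chebyshev.fit(x, f(x), deg=12, domain=[a, b])

grid = np.linspace(a, b, 1000)
err = np.max(np.abs(series(grid) - f(grid)))
print(err < 1e-4)
```

Because f is analytic on [a, b], the Chebyshev coefficients decay geometrically, so even this modest degree leaves a tiny uniform error.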
Quantum Hurwitz numbers and Macdonald polynomials
NASA Astrophysics Data System (ADS)
Harnad, J.
2016-11-01
Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.
More on rotations as spin matrix polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtright, Thomas L.
2015-09-15
Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.
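The order-2j reduction is quick to verify numerically for j = 1, where the rotation exponential collapses to a quadratic matrix polynomial. The check below uses a diagonal spin-1 generator for simplicity (any J with eigenvalues {-1, 0, +1}, hence J^3 = J, obeys the same identity):

```python
# Sketch: for spin j = 1, exp(i*theta*J) = I + i*sin(theta)*J
# + (cos(theta) - 1)*J^2, a matrix polynomial of order 2j = 2.
import numpy as np

J = np.diag([1.0, 0.0, -1.0])   # spin-1 J_z in its eigenbasis
theta = 0.7

exact = np.diag(np.exp(1j * theta * np.diag(J)))
poly = np.eye(3) + 1j * np.sin(theta) * J + (np.cos(theta) - 1.0) * (J @ J)

print(np.allclose(exact, poly))  # True
```

On each eigenvalue m the polynomial gives 1 + i m sin(theta) + m^2 (cos(theta) - 1), which equals e^{i m theta} for m in {-1, 0, 1}; finding such coefficient formulas for general j is the problem the paper addresses.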
Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting
2018-01-21
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines were all well described by polynomial functions in phantom tests mimicking human bodies, consistent with recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial function gave the most distinguished performance, with stable, low-computation-burden fitting calibration (R-square >0.99 for all probes), as evaluated by R-square, the sum of squares due to error, and the residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
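Fourth-order polynomial baseline removal of the kind evaluated above is a one-call least-squares fit. A sketch on a synthetic drifting signal (the drift model and amplitudes are invented, not the device's data):

```python
# Sketch: remove a slow baseline drift with a fourth-order
# least-squares polynomial and report R-square for the fit.
import numpy as np

t = np.linspace(0.0, 3.5, 500)                          # hours
baseline = 0.2 + 0.1 * t - 0.04 * t**2 + 0.005 * t**4   # hypothetical drift
signal = baseline + 0.01 * np.sin(40 * t)               # drift + fast signal

coeffs = np.polyfit(t, signal, 4)
fit = np.polyval(coeffs, t)

ss_res = np.sum((signal - fit) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_square = 1.0 - ss_res / ss_tot

detrended = signal - fit   # what remains after baseline removal
print(r_square > 0.99)
```

The slow quartic drift is absorbed almost entirely by the fit, while the fast oscillatory component survives in `detrended`, which is the desired behavior for online baseline removal.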
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevast'yanov, E A; Sadekova, E Kh
The Bulgarian mathematicians Sendov, Popov, and Boyanov have well-known results on the asymptotic behaviour of the least deviations of 2π-periodic functions in the classes H^ω from trigonometric polynomials in the Hausdorff metric. However, the asymptotics they give are not adequate to detect a difference in, for example, the rate of approximation of functions f whose moduli of continuity ω(f;δ) differ by factors of the form (log(1/δ))^β. Furthermore, a more detailed determination of the asymptotic behaviour by traditional methods becomes very difficult. This paper develops an approach based on using trigonometric snakes as approximating polynomials. The snakes of order n inscribed in the Minkowski δ-neighbourhood of the graph of the approximated function f provide, in a number of cases, the best approximation for f (for the appropriate choice of δ). The choice of δ depends on n and f and is based on constructing polynomial kernels adjusted to the Hausdorff metric and polynomials with special oscillatory properties. Bibliography: 19 titles.
Mocan, Mehmet C; Ilhan, Hacer; Gurcay, Hasmet; Dikmetas, Ozlem; Karabulut, Erdem; Erdener, Ugur; Irkec, Murat
2014-06-01
To derive a mathematical expression for the healthy upper eyelid (UE) contour and to use this expression to differentiate the normal UE curve from its abnormal configuration in the setting of blepharoptosis. The study was designed as a cross-sectional study. Fifty healthy subjects (26M/24F) and 50 patients with blepharoptosis (28M/22F) with a margin-reflex distance (MRD1) of ≤2.5 mm were recruited. A polynomial interpolation was used to approximate the UE curve. The polynomial coefficients were calculated from digital eyelid images of all participants using a set of operator-defined points along the UE curve. Coefficients up to the fourth-order polynomial, iris area covered by the UE, iris area covered by the lower eyelid and total iris area covered by both the upper and the lower eyelids were defined using the polynomial function and used in statistical comparisons. The t-test, Mann-Whitney U test and Spearman's correlation test were used for statistical comparisons. The mathematical expression derived from the data of 50 healthy subjects aged 24.1 ± 2.6 years was defined as y = 22.0915 - 1.3213x + 0.0318x^2 - 0.0005x^3. The fifth and subsequent coefficients were <0.00001 in all cases and were not included in the polynomial function. None of the first four coefficients of the equation differed significantly between male and female subjects. In normal subjects, the percentage of the iris area covered by the upper and lower lids was 6.46 ± 5.17% and 0.66 ± 1.62%, respectively. All coefficients and the mean iris area covered by the UE were significantly different between healthy and ptotic eyelids. The healthy and abnormal eyelid contour can be defined and differentiated using a polynomial mathematical function.
Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Zachary; Verduijn, Erik; Wood, Obert R.
2016-04-01
Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that, of the three, the Zernike polynomials describe pupil amplitude variation most effectively.
On Certain Wronskians of Multiple Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Zhang, Lun; Filipuk, Galina
2014-11-01
We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble, the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hounkonnou, Mahouton Norbert; Nkouankam, Elvis Benzo Ngompe
2010-10-15
From the realization of q-oscillator algebra in terms of generalized derivative, we compute the matrix elements from deformed exponential functions and deduce generating functions associated with Rogers-Szego polynomials as well as their relevant properties. We also compute the matrix elements associated with the (p,q)-oscillator algebra (a generalization of the q-one) and perform the Fourier-Gauss transform of a generalization of the deformed exponential functions.
Operational Solution to the Nonlinear Klein-Gordon Equation
NASA Astrophysics Data System (ADS)
Bengochea, G.; Verde-Star, L.; Ortigueira, M.
2018-05-01
We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method does not use any integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. Supported by Grant SEP-CONACYT 220603; the first author was supported by SEP-PRODEP through the project UAM-PTC-630; the third author was supported by Portuguese National Funds through the FCT Foundation for Science and Technology under the project PEst-UID/EEA/00066/2013.
Polynomial reduction and evaluation of tree- and loop-level CHY amplitudes
Zlotnikov, Michael
2016-08-24
We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n - 3)(n - 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.
Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.
Mahajan, Virendra N
2012-06-20
In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L_l(x)L_m(y), where l and m are nonnegative integers and L_l(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L_l(x)L_m(y), there is a corresponding orthonormal polynomial L_l(y)L_m(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
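The loss of separability over the circular pupil can be seen numerically: products of Legendre polynomials that are orthogonal over a square are no longer orthogonal over the disk, so Gram-Schmidt must mix x and y terms. A Monte Carlo sketch (illustrative only, not the paper's analytic construction):

```python
# Sketch: over the unit disk, the separable products P_2(x)*P_0(y) and
# P_0(x)*P_2(y) fail to be orthogonal, forcing Gram-Schmidt to produce
# non-separable orthonormal polynomials. Monte Carlo disk average used.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (400000, 2))
pts = pts[pts[:, 0]**2 + pts[:, 1]**2 <= 1.0]   # keep points inside the disk
x, y = pts[:, 0], pts[:, 1]

def leg(k, t):
    """Legendre polynomial P_k evaluated at t."""
    return L.legval(t, [0] * k + [1])

# Center each product against the constant term, then take the disk-average
# inner product; analytically this is -3/64 = -0.046875 on the disk.
p = leg(2, x) - np.mean(leg(2, x))
q = leg(2, y) - np.mean(leg(2, y))
mix = np.mean(p * q)
print(round(mix, 3))   # nonzero: the x^2 and y^2 terms mix on the disk
```

Over a square pupil the same inner product vanishes by separability of the integral, which is exactly the property the circular pupil destroys.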
ERIC Educational Resources Information Center
Caglayan, Günhan
2014-01-01
This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
NASA Astrophysics Data System (ADS)
Hoque, Md. Fazlul; Marquette, Ian; Post, Sarah; Zhang, Yao-Zhong
2018-04-01
We introduce an extended Kepler-Coulomb quantum model in spherical coordinates. The Schrödinger equation of this Hamiltonian is solved in these coordinates and it is shown that the wave functions of the system can be expressed in terms of Laguerre, Legendre and exceptional Jacobi polynomials (of hypergeometric type). We construct ladder and shift operators based on the corresponding wave functions and obtain their recurrence formulas. These recurrence relations are used to construct higher-order, algebraically independent integrals of motion to prove superintegrability of the Hamiltonian. The integrals form a higher rank polynomial algebra. By constructing the structure functions of the associated deformed oscillator algebras we derive the degeneracy of energy spectrum of the superintegrable system.
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
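A minimal sketch of the core operation behind such polynomial-mapping calibrations (synthetic data and made-up distortion coefficients, not the authors' calibration): fit a quadratic 2D polynomial mapping from object-space coordinates to one sensor coordinate by least squares on a dot-card-like grid:

```python
# 5x5 grid of known calibration points (object space)
pts = [(x, y) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)
       for y in (-1.0, -0.5, 0.0, 0.5, 1.0)]
true_c = [0.0, 1.0, 0.0, 0.05, 0.02, 0.0]     # u = x + 0.05 x^2 + 0.02 x y

def basis(x, y):
    return [1.0, x, y, x * x, x * y, y * y]

# synthetic distorted sensor coordinate at each calibration point
us = [sum(c * b for c, b in zip(true_c, basis(x, y))) for x, y in pts]

n = 6  # normal equations (Phi^T Phi) c = Phi^T u, Gaussian elimination
Ab = [[sum(basis(x, y)[i] * basis(x, y)[j] for x, y in pts) for j in range(n)]
      + [sum(basis(x, y)[i] * u for (x, y), u in zip(pts, us))]
      for i in range(n)]
for i in range(n):
    p = max(range(i, n), key=lambda r: abs(Ab[r][i]))   # partial pivoting
    Ab[i], Ab[p] = Ab[p], Ab[i]
    for r in range(i + 1, n):
        fac = Ab[r][i] / Ab[i][i]
        for col in range(i, n + 1):
            Ab[r][col] -= fac * Ab[i][col]
coef = [0.0] * n
for i in range(n - 1, -1, -1):
    coef[i] = (Ab[i][n] - sum(Ab[i][j] * coef[j]
                              for j in range(i + 1, n))) / Ab[i][i]

print([round(v, 6) for v in coef])  # recovers the generating coefficients
```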
Volumetric calibration of a plenoptic camera
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...
2018-02-01
Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries.
Richardson, Megan; Lambers, James V
2016-01-01
This paper introduces two families of orthogonal polynomials on the interval (-1,1), with weight function [Formula: see text]. The first family satisfies the boundary condition [Formula: see text], and the second one satisfies the boundary conditions [Formula: see text]. These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
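The "short linear combinations of Legendre polynomials" starting point can be demonstrated directly (a sketch of the construction idea; the paper's weight function and normalizations are not reproduced here): since P_n(1) = 1 and P_n(-1) = (-1)^n, the combination P_{n+2} - P_n vanishes at both endpoints, satisfying the Dirichlet-type boundary condition before any orthogonalization step:

```python
def legendre(n, x):
    """P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def q(n, x):
    """Short Legendre combination vanishing at x = +/-1."""
    return legendre(n + 2, x) - legendre(n, x)

for n in range(4):
    print(n, q(n, 1.0), q(n, -1.0))   # endpoint values are exactly 0.0
```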
The NonConforming Virtual Element Method for the Stokes Equations
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
2016-01-01
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
Generalized Freud's equation and level densities with polynomial potential
NASA Astrophysics Data System (ADS)
Boobna, Akshat; Ghosh, Saugata
2013-08-01
We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of degree 2d. We derive the generalised Freud equations for $d=3$, 4 and 5 and use them to obtain $R_{\mu}=h_{\mu}/h_{\mu-1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation, and from these, explicit results for the level densities as $N\rightarrow\infty$ are derived.
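For the classical quartic case the Freud equation can be checked numerically. The sketch below (an assumption-laden illustration: it takes the plain Freud weight exp(-x^4), i.e. N = 1, rather than the paper's general setting) computes R_mu = h_mu/h_{mu-1} by the Stieltjes procedure and verifies the string equation 4 R_n (R_{n-1} + R_n + R_{n+1}) = n:

```python
import math

M = 40001                        # trapezoid grid on [-6, 6]
xs = [-6.0 + 12.0 * i / (M - 1) for i in range(M)]
step = 12.0 / (M - 1)
wt = [math.exp(-x ** 4) for x in xs]

def quad(vals):                  # trapezoid rule against the weight
    s = sum(v * w for v, w in zip(vals, wt))
    s -= 0.5 * (vals[0] * wt[0] + vals[-1] * wt[-1])
    return s * step

p_prev = [1.0] * M               # monic p_0
p_cur = list(xs)                 # monic p_1 (even weight, so all a_n = 0)
hs = [quad([v * v for v in p_prev]), quad([v * v for v in p_cur])]
R = [0.0, hs[1] / hs[0]]
for n in range(1, 5):            # Stieltjes: p_{n+1} = x p_n - R_n p_{n-1}
    p_prev, p_cur = p_cur, [x * pc - R[n] * pp
                            for x, pc, pp in zip(xs, p_cur, p_prev)]
    hs.append(quad([v * v for v in p_cur]))
    R.append(hs[-1] / hs[-2])

for n in range(1, 4):            # Freud string equation for exp(-x^4)
    print(n, round(4 * R[n] * (R[n - 1] + R[n] + R[n + 1]), 3))  # ~ n
```

Analytically, R_1 = Γ(3/4)/Γ(1/4) ≈ 0.338, and the computed ratios match this.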
First Instances of Generalized Expo-Rational Finite Elements on Triangulations
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre
2011-12-01
In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the derivation of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.
Near Real-Time Closed-Loop Optimal Control Feedback for Spacecraft Attitude Maneuvers
2009-03-01
Table-of-contents excerpts: 3.8 Positive ωi Static Thrust Fan Characterization Polynomial Coefficients; 3.9 Negative ωi Static Thrust Fan Characterization Polynomial Coefficients; 4.1 Coefficients for SimSAT II's Air Drag Polynomial Function; 5.1 OLOC Simulation. Researchers using OCT identified that naturally occurring aerodynamic drag and gravity forces could be exploited in such a way that the CMGs
On the best mean-square approximations to a planet's gravitational potential
NASA Astrophysics Data System (ADS)
Lobkova, N. I.
1985-02-01
The continuous problem of approximating the gravitational potential of a planet in the form of polynomials of solid spherical functions is considered. The best mean-square polynomials, referred to different parts of space, are compared with each other. The harmonic coefficients corresponding to the surface of a planet are shown to be unstable with respect to the degree of the polynomial and to differ from the Stokes constants.
On direct theorems for best polynomial approximation
NASA Astrophysics Data System (ADS)
Auad, A. A.; AbdulJabbar, R. S.
2018-05-01
This paper obtains analogues of well-known direct theorems for the degree of best approximation $E_n^H(f)_{p,\alpha}$ of unbounded functions in the weighted space $L_{p,\alpha}$, $A=[0,1]$, by algebraic polynomials, and for the degree of best approximation $E_n^T(f)_{p,\alpha}$ on the interval $[0,2\pi]$ by trigonometric polynomials, in terms of averaged moduli.
DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutauruk, P. T. P.; Ireland, D. G.; Rosner, G.
2009-04-01
Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1 where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
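The model-selection question (how many Legendre terms the data need) can be illustrated on synthetic data (this sketch uses made-up coefficients and ordinary Legendre polynomials, not the CLAS data or the Bayesian machinery of the analysis): project an angular distribution onto Legendre polynomials and observe that coefficients beyond the true order are negligible:

```python
import math

def legendre(n, x):
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

true_c = [1.0, 0.4, -0.25, 0.1]          # synthetic expansion, order 3

def f(x):                                 # stand-in for dsigma/dOmega(cos theta)
    return sum(c * legendre(k, x) for k, c in enumerate(true_c))

def coeff(k, N=2000):                     # c_k = (2k+1)/2 * int_{-1}^{1} f P_k
    h = 2.0 / N
    s = 0.0
    for i in range(N + 1):
        x = -1.0 + i * h
        w = h / 2 if i in (0, N) else h
        s += w * f(x) * legendre(k, x)
    return (2 * k + 1) / 2.0 * s

est = [coeff(k) for k in range(6)]
print([round(c, 3) for c in est])         # first four ~ true_c, last two ~ 0
```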
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olesov, A V
2014-10-31
New inequalities are established for analytic functions satisfying Meiman's majorization conditions. Estimates for the values of, and differential inequalities involving, rational trigonometric functions with an integer majorant on an interval of length less than the period and with prescribed poles positioned symmetrically relative to the real axis, as well as differential inequalities for trigonometric polynomials in certain classes, are given as applications. These results improve several theorems due to Meiman, Genchev, Smirnov and Rusak. Bibliography: 27 titles.
The use of rational functions in numerical quadrature
NASA Astrophysics Data System (ADS)
Gautschi, Walter
2001-08-01
Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.
Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.
Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez
2016-03-07
A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained applying an appropriate change of variables to Legendre polynomials, whereas the system for general freeform case is obtained applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.
Global stability and quadratic Hamiltonian structure in Lotka-Volterra and quasi-polynomial systems
NASA Astrophysics Data System (ADS)
Szederkényi, Gábor; Hangos, Katalin M.
2004-04-01
We show that the global stability of quasi-polynomial (QP) and Lotka-Volterra (LV) systems with the well-known logarithmic Lyapunov function is equivalent to the existence of a local generalized dissipative Hamiltonian description of the LV system with a diagonal quadratic form as a Hamiltonian function. The Hamiltonian function can be calculated and the quadratic dissipativity neighborhood of the origin can be estimated by solving linear matrix inequalities.
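The logarithmic Lyapunov function mentioned here is easy to observe in action. The toy check below (system, step size, and initial condition chosen for illustration, not taken from the paper) simulates a two-species Lotka-Volterra system with a dissipative interaction matrix and confirms that V decreases monotonically along the trajectory:

```python
import math

A = [[-2.0, -1.0], [-1.0, -2.0]]   # negative definite interaction matrix
r = [3.0, 3.0]
xstar = [1.0, 1.0]                 # interior equilibrium: r + A x* = 0

def V(x):                          # logarithmic Lyapunov function (c_i = 1)
    return sum(xi - s - s * math.log(xi / s) for xi, s in zip(x, xstar))

x = [0.5, 1.6]
dt = 1e-3
vals = [V(x)]
for _ in range(3000):              # explicit Euler with a small step
    dx = [x[i] * (r[i] + sum(A[i][j] * x[j] for j in range(2)))
          for i in range(2)]
    x = [x[i] + dt * dx[i] for i in range(2)]
    vals.append(V(x))

print(vals[0] > vals[-1])          # True: V has decayed along the trajectory
```

Along exact trajectories dV/dt = (x - x*)^T A (x - x*) < 0, which is the quadratic-form dissipativity the abstract refers to.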
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known for strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space from the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick can use various kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for the optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selecting them. The best accuracy has been improved from linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%. However, for bigger data sizes this method is not practical because it takes a lot of time.
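A minimal GA sketch of the parameter-search idea (the fitness function below is a toy surrogate standing in for cross-validated SVM accuracy; population size, rates, and the peak location are arbitrary assumptions, not values from the paper):

```python
import random

random.seed(0)

def score(gamma):                 # surrogate for CV accuracy, peak at gamma = 0.5
    return 1.0 / (1.0 + (gamma - 0.5) ** 2)

pop = [random.uniform(0.0, 10.0) for _ in range(20)]
init_best = max(pop, key=score)
for _ in range(60):
    pop.sort(key=score, reverse=True)
    parents = pop[:10]            # truncation selection with elitism
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b) + random.gauss(0.0, 0.2)   # crossover + mutation
        children.append(min(10.0, max(0.0, child)))
    pop = parents + children

best = max(pop, key=score)
print(score(best) >= score(init_best))   # True: elitism never loses the best
```

Because the elite parents always survive, the best fitness is monotonically nondecreasing, which is the systematic advantage over random parameter selection noted in the abstract.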
A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar
2015-01-01
In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
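A numerical caricature of the detection question (this is only a sampling sketch with made-up trajectories and separation minima; the paper's algorithm is exact and formally verified, unlike dense sampling): since each position coordinate is a polynomial in time, the squared horizontal distance is itself a polynomial whose minimum over the lookahead interval decides the conflict:

```python
# horizontal positions as polynomials in t, coefficients low -> high
ax, ay = [0.0, 1.0], [0.0, 0.0]          # aircraft A flies east along y = 0
bx, by = [10.0, -1.0], [0.5, 0.0]        # aircraft B flies west along y = 0.5
D = 1.0                                   # required horizontal separation

def poly(c, t):
    return sum(ci * t ** i for i, ci in enumerate(c))

def min_sep2(T, n=10000):
    """Minimum squared distance over the lookahead [0, T] by dense sampling."""
    return min((poly(ax, t) - poly(bx, t)) ** 2
               + (poly(ay, t) - poly(by, t)) ** 2
               for t in (T * i / n for i in range(n + 1)))

print(min_sep2(10.0) < D * D)   # True: loss of separation inside the lookahead
print(min_sep2(2.0) < D * D)    # False: no conflict within a short lookahead
```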
Hilbert's 17th Problem and the Quantumness of States
NASA Astrophysics Data System (ADS)
Korbicz, J. K.; Cirac, J. I.; Wehr, Jan; Lewenstein, M.
2005-04-01
A state of a quantum system can be regarded as classical (quantum) with respect to measurements of a set of canonical observables if and only if there exists (does not exist) a well defined, positive phase-space distribution, the so-called Glauber-Sudarshan P representation. We derive a family of classicality criteria requiring that the averages of positive functions calculated using the P representation be positive. For polynomial functions, these criteria are related to Hilbert's 17th problem, and have the physical meaning of generalized squeezing conditions; alternatively, they may be interpreted as nonclassicality witnesses. We show that every generic nonclassical state can be detected by a polynomial that is a sum-of-squares of other polynomials. We introduce a very natural hierarchy of states regarding their degree of quantumness, which we relate to the minimal degree of a sum-of-squares polynomial that detects them.
Concentration of the L₁-norm of trigonometric polynomials and entire functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malykhin, Yu V; Ryutin, K S
2014-11-30
For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤n gains half of the L₁-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2016-06-01
Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
Teichert, Gregory H.; Gunda, N. S. Harsha; Rudraraju, Shiva; ...
2016-12-18
Free energies play a central role in many descriptions of equilibrium and non-equilibrium properties of solids. Continuum partial differential equations (PDEs) of atomic transport, phase transformations and mechanics often rely on first and second derivatives of a free energy function. The stability, accuracy and robustness of numerical methods to solve these PDEs are sensitive to the particular functional representations of the free energy. In this communication we investigate the influence of different representations of thermodynamic data on phase field computations of diffusion and two-phase reactions in the solid state. First-principles statistical mechanics methods were used to generate realistic free energy data for HCP titanium with interstitially dissolved oxygen. While Redlich-Kister polynomials have formed the mainstay of thermodynamic descriptions of multi-component solids, they require high order terms to fit oscillations in chemical potentials around phase transitions. Here, we demonstrate that high fidelity fits to rapidly fluctuating free energy functions are obtained with spline functions. As a result, spline functions that are many degrees lower than Redlich-Kister polynomials provide equal or superior fits to chemical potential data and, when used in phase field computations, result in solution times approaching an order of magnitude speed up relative to the use of Redlich-Kister polynomials.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
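The advantage of exponential interpolants for stiff problems can be seen on a one-line model equation (a toy illustration, not the CREK1D scheme): for y' = -50y with h = 0.1, explicit Euler violates its stability bound (hλ = 5 > 2) and blows up, while the exponentially fitted step is exact:

```python
import math

lam, h, steps = 50.0, 0.1, 20
y_euler, y_expo = 1.0, 1.0
for _ in range(steps):
    y_euler += h * (-lam * y_euler)   # polynomial (linear) interpolant
    y_expo *= math.exp(-lam * h)      # exponential interpolant, exact here

exact = math.exp(-lam * h * steps)
print(abs(y_euler) > 1e6, abs(y_expo - exact) < 1e-12)  # True True
```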
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
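The sampling-and-weighting idea can be sketched in the simplest univariate setting (several simplifying assumptions: Legendre polynomials on [-1,1], whose equilibrium measure is the arcsine density, and a degree-2 target so the weighted least-squares fit is exact by construction; the paper treats general weighted pluripotential measures):

```python
import math, random

random.seed(1)

def legendre(n, x):
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def phi(k, x):                 # orthonormal Legendre
    return math.sqrt((2 * k + 1) / 2.0) * legendre(k, x)

deg = 2
f = lambda x: 3.0 - 2.0 * x + 0.5 * x * x      # target, itself degree 2

xs = [math.cos(math.pi * random.random()) for _ in range(40)]  # arcsine draws
wts = [1.0 / sum(phi(k, x) ** 2 for k in range(deg + 1)) for x in xs]

# weighted normal equations (Phi^T W Phi) c = Phi^T W y, Gaussian elimination
n = deg + 1
Ab = [[sum(w * phi(i, x) * phi(j, x) for x, w in zip(xs, wts))
       for j in range(n)]
      + [sum(w * phi(i, x) * f(x) for x, w in zip(xs, wts))]
      for i in range(n)]
for i in range(n):
    p = max(range(i, n), key=lambda r: abs(Ab[r][i]))
    Ab[i], Ab[p] = Ab[p], Ab[i]
    for r_ in range(i + 1, n):
        fac = Ab[r_][i] / Ab[i][i]
        for c_ in range(i, n + 1):
            Ab[r_][c_] -= fac * Ab[i][c_]
c = [0.0] * n
for i in range(n - 1, -1, -1):
    c[i] = (Ab[i][n] - sum(Ab[i][j] * c[j] for j in range(i + 1, n))) / Ab[i][i]

approx = lambda x: sum(ck * phi(k, x) for k, ck in enumerate(c))
print(all(abs(approx(t) - f(t)) < 1e-9 for t in (-0.7, 0.0, 0.3, 0.9)))  # True
```

The weights 1/sum_k phi_k(x)^2 are evaluations of the (inverse) Christoffel function, which is what equilibrates the influence of samples drawn from the equilibrium measure.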
A Christoffel function weighted least squares algorithm for collocation approximations
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm
Svečko, Rajko
2014-01-01
This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749
Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O
2009-04-01
This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both the examples show that our approach provides more extensive design results for the existing LMI approach.
On Convergence Aspects of Spheroidal Monogenics
NASA Astrophysics Data System (ADS)
Georgiev, S.; Morais, J.
2011-09-01
Orthogonal polynomials have found wide applications in mathematical physics, numerical analysis, and other fields; accordingly, there is an enormous variety of such polynomials and of relations describing their properties. The paper's main results are a discussion of approximation properties for monogenic functions over prolate spheroids in R^3 in terms of orthogonal monogenic polynomials, and of their interdependences. Certain results are stated without proof for now. The motivation for the present study stems from the fact that these polynomials play an important role in the calculation of the Bergman kernel and Green's monogenic functions in a spheroid. Once these functions are known, it is possible to solve both basic boundary value and conformal mapping problems. Interestingly, most of the methods used have an n-dimensional counterpart and can be extended to arbitrary ellipsoids, but such a procedure would make the further study of the underlying ellipsoidal monogenics somewhat laborious, and for this reason we shall not discuss these general cases here. To the best of our knowledge, this does not appear to have been done in the literature before.
The Maximums and Minimums of a Polynomial or Maximizing Profits and Minimizing Aircraft Losses.
ERIC Educational Resources Information Center
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Xue, Yusheng
2016-03-01
This paper deals with the problem of control synthesis of discrete-time Takagi-Sugeno fuzzy systems by employing a novel multiinstant homogeneous polynomial approach. A new multiinstant fuzzy control scheme and a new class of fuzzy Lyapunov functions, which are homogeneous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the past-time normalized fuzzy weighting functions, are proposed for implementing the object of relaxed control synthesis. Then, relaxed stabilization conditions are derived with less conservatism than existing ones. Furthermore, the relaxation quality of obtained stabilization conditions is further ameliorated by developing an efficient slack variable approach, which presents a multipolynomial dependence on the normalized fuzzy weighting functions at the current and past instants of time. Two simulation examples are given to demonstrate the effectiveness and benefits of the results developed in this paper.
Williams, Jennifer Stewart
2011-07-01
To show how fractional polynomial methods can usefully replace the practice of arbitrarily categorizing data in epidemiology and health services research. A health service setting is used to illustrate a structured and transparent way of representing non-linear data without arbitrary grouping. When age is a regressor its effects on an outcome will be interpreted differently depending upon the placing of cutpoints or the use of a polynomial transformation. Although it is common practice, categorization comes at a cost. Information is lost, and accuracy and statistical power reduced, leading to spurious statistical interpretation of the data. The fractional polynomial method is widely supported by statistical software programs, and deserves greater attention and use.
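First-degree fractional polynomial selection is simple to sketch (illustrative powers and synthetic data, not the article's health-service data): for each candidate power p in the standard FP set, fit y = b0 + b1·x^p by least squares (with x^0 meaning ln x) and keep the power with the smallest residual sum of squares, instead of categorizing x into arbitrary groups:

```python
import math

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]       # standard FP1 candidate set
xs = [float(v) for v in range(20, 80, 3)]       # e.g. ages 20..79
ys = [2.0 + 0.8 * math.sqrt(x) for x in xs]     # truth generated with p = 0.5

def fit_sse(p):
    t = [math.log(x) if p == 0 else x ** p for x in xs]
    n = len(t)
    tbar, ybar = sum(t) / n, sum(ys) / n
    b1 = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, ys))
          / sum((ti - tbar) ** 2 for ti in t))
    b0 = ybar - b1 * tbar
    return sum((yi - b0 - b1 * ti) ** 2 for ti, yi in zip(t, ys))

best = min(powers, key=fit_sse)
print(best)  # 0.5, the generating power
```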
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
Distortion theorems for polynomials on a circle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubinin, V N
2000-12-31
Inequalities for the derivatives with respect to φ = arg z of the functions Re P(z), |P(z)|² and arg P(z) are established for an algebraic polynomial P(z) at points on the circle |z| = 1. These estimates depend, in particular, on the constant term and the leading coefficient of the polynomial P(z) and improve the classical Bernstein and Turán inequalities. The method of proof is based on the techniques of generalized reduced moduli.
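The classical Bernstein inequality that these estimates sharpen, max_{|z|=1} |P'(z)| ≤ n · max_{|z|=1} |P(z)| for a degree-n polynomial, is easy to check numerically. This sketch (an illustration, not the paper's method) samples a random complex polynomial on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
# random degree-7 polynomial, coefficients in descending powers
coeffs = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
dcoeffs = coeffs[:-1] * np.arange(n, 0, -1)   # coefficients of P'

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
z = np.exp(1j * theta)
maxP = np.abs(np.polyval(coeffs, z)).max()
maxdP = np.abs(np.polyval(dcoeffs, z)).max()
print(maxdP <= n * maxP)   # Bernstein: max|P'| <= n * max|P| on |z| = 1
```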
Mashayekhi, S; Razzaghi, M; Tripak, O
2014-01-01
A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
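The forward model can be sketched in a few lines: a linear mixture is distorted by a quadratic term and corrupted by additive white Gaussian noise. The sizes, endmember spectra, and nonlinearity coefficient below are made-up illustrations, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
L, R = 50, 3                       # 50 spectral bands, 3 endmembers (toy sizes)
M = rng.uniform(0, 1, (L, R))      # hypothetical endmember spectra
a = rng.dirichlet(np.ones(R))      # abundances: nonnegative, summing to one
b = 0.3                            # strength of the quadratic (post)nonlinearity

lin = M @ a                        # linear mixture
y = lin + b * lin * lin + rng.normal(0, 0.01, L)   # PPNM pixel with AWGN
print(y.shape, a.sum())
```

Estimating a, M, and b from y is the inverse problem the paper's Bayesian and optimization algorithms address.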
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
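The derivative recurrence of Legendre polynomials that such a scheme relies on, P'_{n+1}(x) - P'_{n-1}(x) = (2n+1) P_n(x), can be verified numerically (an illustration, not the authors' code):

```python
import numpy as np
from numpy.polynomial import legendre as L

def P(k):
    """Legendre coefficient vector representing P_k."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return c

x = np.linspace(-1, 1, 201)
max_err = 0.0
for n in range(1, 8):
    # P'_{n+1}(x) - P'_{n-1}(x) should equal (2n+1) P_n(x)
    lhs = L.legval(x, L.legder(P(n + 1))) - L.legval(x, L.legder(P(n - 1)))
    rhs = (2 * n + 1) * L.legval(x, P(n))
    max_err = max(max_err, float(np.abs(lhs - rhs).max()))
print(max_err)
```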
Graph characterization via Ihara coefficients.
Ren, Peng; Wilson, Richard C; Hancock, Edwin R
2011-02-01
The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.
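The quasi characteristic polynomial of the oriented line graph can be illustrated on a small example. The graph (K4) and all code below are a hypothetical sketch, not the authors' implementation; the coefficients of det(I - uT) are cross-checked against the Bass determinant formula (1-u²)^{m-n} det(I - uA + u²(D-I)):

```python
import numpy as np

def oriented_line_graph(A):
    """Perron-Frobenius operator T of the oriented line graph of a simple graph."""
    arcs = [(i, j) for i in range(len(A)) for j in range(len(A)) if A[i][j]]
    T = np.zeros((len(arcs), len(arcs)))
    for p, (a, b) in enumerate(arcs):
        for q, (c, d) in enumerate(arcs):
            if b == c and (c, d) != (b, a):   # continue the walk, no backtracking
                T[p, q] = 1.0
    return T

A = np.array([[0, 1, 1, 1],       # K4 as a small test graph
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
T = oriented_line_graph(A)
# Ihara coefficients: det(I - uT) = sum_k c[k] u^k; np.poly gives [1, -e1, e2, ...]
c = np.real(np.poly(T))

# cross-check with the Bass determinant formula at u = 0.2
u = 0.2
n, m = 4, int(A.sum()) // 2
D = np.diag(A.sum(axis=1))
bass = (1 - u**2) ** (m - n) * np.linalg.det(np.eye(n) - u * A + u**2 * (D - np.eye(n)))
poly_val = sum(ck * u**k for k, ck in enumerate(c))
print(abs(poly_val - bass))
```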
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zlotnikov, Michael
We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for n scattering particles into a σ-moduli multivariate polynomial of what we call the standard form. We show that a standard form polynomial must have a specific ladder type monomial structure, which has finite size at any n, with highest multivariate degree given by (n – 3)(n – 4)/2. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. Furthermore, the prescription is then applied explicitly to some tree and one-loop amplitude examples.
CKP Hierarchy, Bosonic Tau Function and Bosonization Formulae
NASA Astrophysics Data System (ADS)
van de Leur, Johan W.; Orlov, Alexander Yu.; Shiota, Takahiro
2012-06-01
We develop the theory of the CKP hierarchy introduced in the papers of the Kyoto school [Date E., Jimbo M., Kashiwara M., Miwa T., J. Phys. Soc. Japan 50 (1981), 3806-3812] (see also [Kac V.G., van de Leur J.W., Adv. Ser. Math. Phys., Vol. 7, World Sci. Publ., Teaneck, NJ, 1989, 369-406]). We present appropriate bosonization formulae. We show that in the context of the CKP theory certain orthogonal polynomials appear. These polynomials are polynomial both in even and odd (in the Grassmannian sense) variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
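A minimal sketch of the Christoffel-function weights, assuming Legendre orthogonality on [-1, 1]; the sample size, degree, and arcsine sampling below are illustrative choices only:

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_weights(x, deg):
    """Diagonal preconditioner from the inverse Christoffel function K_N(x, x)."""
    V = legendre.legvander(x, deg)
    norms = np.sqrt(2.0 / (2 * np.arange(deg + 1) + 1))  # ||P_k|| on [-1, 1]
    P = V / norms                      # orthonormal Legendre values p_k(x)
    K = (P**2).sum(axis=1)             # K_N(x, x) = sum_k p_k(x)^2
    return 1.0 / np.sqrt(K)            # scales the rows of the design matrix

# sample from the (arcsine) equilibrium measure of [-1, 1]
rng = np.random.default_rng(3)
x = np.cos(np.pi * rng.uniform(0, 1, 40))
w = christoffel_weights(x, deg=8)
print(w.shape)
```

These weights would precondition the measurement matrix before the ℓ1 solve.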
Equations on knot polynomials and 3d/5d duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mironov, A.; Morozov, A.; ITEP, Moscow
2012-09-24
We briefly review the current situation with various relations between knot/braid polynomials (Chern-Simons correlation functions), ordinary and extended, considered as functions of the representation and of the knot topology. These include linear skein relations, quadratic Plucker relations, as well as 'differential' and (quantum) A-polynomial structures. We pay special attention to the identity between the A-polynomial equations for knots and Baxter equations for quantum relativistic integrable systems, related through Seiberg-Witten theory to 5d super-Yang-Mills models and through the AGT relation to the q-Virasoro algebra. This identity is an important ingredient of an emerging 3d-5d generalization of the AGT relation. The shape of the Baxter equation (including the values of coefficients) depends on the choice of the knot/braid. Thus, like the case of KP integrability, where (some, so far torus) knots parameterize particular points of the Universal Grassmannian, in this relation they parameterize particular points in the moduli space of many-body integrable systems of relativistic type.
Colored knot polynomials for arbitrary pretzel knots and links
Galakhov, D.; Melnikov, D.; Mironov, A.; ...
2015-04-01
A very simple expression is conjectured for arbitrary colored Jones and HOMFLY polynomials of a rich (g+1)-parametric family of pretzel knots and links. The answer for the Jones and HOMFLY is fully and explicitly expressed through the Racah matrix of U_q(SU_N), and looks related to a modular transformation of the toric conformal block. Knot polynomials are among the hottest topics in modern theory. They are supposed to summarize nicely the representation theory of quantum algebras and modular properties of conformal blocks. The result reported in the present letter provides a spectacular illustration of, and support for, this general expectation.
Phase demodulation method from a single fringe pattern based on correlation with a polynomial form.
Robin, Eric; Valle, Valéry; Brémand, Fabrice
2005-12-01
The method presented extracts the demodulated phase from only one fringe pattern. Locally, this method approaches the fringe pattern morphology with the help of a mathematical model. The degree of similarity between the mathematical model and the real fringe is estimated by minimizing a correlation function. To use an optimization process, we have chosen a polynomial form as the mathematical model. However, the use of a polynomial form induces an identification procedure for retrieving the demodulated phase. This method, polynomial modulated phase correlation, is tested on several examples. Its performance, in terms of speed and precision, is presented on very noisy fringe patterns.
Random regression models using different functions to model milk flow in dairy cows.
Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G
2014-09-12
We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the three daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using single-trait random regression models that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression on Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
NASA Technical Reports Server (NTRS)
Morelli, E. A.; Proffitt, M. S.
1999-01-01
The data for longitudinal non-dimensional, aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.
Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror
NASA Astrophysics Data System (ADS)
Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu
2017-02-01
For a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by this system is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct wave aberrations described by Zernike polynomials 3-20 is analyzed under different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the system's correction ability for Zernike modes 3-9 is higher than that for modes 10-20, and that the correction ability for modes 3-20 is insensitive to small misalignment errors. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for modes 3-20 gradually decreases; as the translation error increases, the correction ability for modes 3-9 gradually decreases, while that for modes 10-20 fluctuates as it declines.
Stability analysis of fuzzy parametric uncertain systems.
Bhiwani, R J; Patre, B M
2011-10-01
In this paper, the determination of the stability margin and the gain and phase margin aspects of fuzzy parametric uncertain systems (FPUS) is dealt with. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin of FPUS is proposed. The method suggested depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than 5, it is not always necessary to determine and check all four Kharitonov polynomials. It has been shown that, for determining the stability margin of FPUS of order five, four, and three, we require only 3, 2, and 1 Kharitonov polynomials, respectively. Only for sixth- and higher-order polynomials is a complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margin of FPUS can be determined analytically without using graphical techniques.
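For reference, the complete set of four Kharitonov polynomials, which the paper shows can be pared down for low orders, can be formed and checked as follows. The interval bounds here are hypothetical:

```python
import numpy as np

def kharitonov(lower, upper):
    """Four Kharitonov polynomials (ascending coefficients) for an interval family."""
    # ascending-power coefficient choices repeat in period-4 patterns
    pats = [("l", "l", "u", "u"), ("u", "l", "l", "u"),
            ("l", "u", "u", "l"), ("u", "u", "l", "l")]
    polys = []
    for pat in pats:
        c = [lower[i] if pat[i % 4] == "l" else upper[i] for i in range(len(lower))]
        polys.append(c)
    return polys

def is_hurwitz(asc):
    # stable iff all roots lie in the open left half-plane
    roots = np.roots(asc[::-1])     # np.roots expects descending coefficients
    return bool(np.all(roots.real < 0))

lower = [1.0, 2.0, 3.0, 1.0]        # interval family a0 + a1 s + a2 s^2 + a3 s^3
upper = [2.0, 3.0, 4.0, 2.0]
K = kharitonov(lower, upper)
print([is_hurwitz(p) for p in K])
```

For this (made-up) third-order family all four vertex polynomials are Hurwitz, so by Kharitonov's theorem the whole interval family is stable; the paper's point is that for orders three to five fewer than four such checks suffice.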
Constructing a polynomial whose nodal set is the three-twist knot 5_2
NASA Astrophysics Data System (ADS)
Dennis, Mark R.; Bode, Benjamin
2017-06-01
We describe a procedure that creates an explicit complex-valued polynomial function of three-dimensional space, whose nodal lines are the three-twist knot 5_2. The construction generalizes a similar approach for lemniscate knots: a braid representation is engineered from finite Fourier series and then considered as the nodal set of a certain complex polynomial which depends on an additional parameter. For sufficiently small values of this parameter, the nodal lines form the three-twist knot. Further mathematical properties of this map are explored, including the relationship of the phase critical points with the Morse-Novikov number, which is nonzero as this knot is not fibred. We also find analogous functions for other simple knots and links. The particular function we find, and the general procedure, should be useful for designing knotted fields of particular knot types in various physical systems.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
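A miniature version of such a Monte Carlo study, with the model function, noise level, and sample sizes chosen arbitrarily here, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)

def mse_of_cubic_fit(n, trials=200):
    """Mean squared error of a cubic fit to noisy samples of a transcendental model."""
    errs = []
    x = np.linspace(0, 1, n)
    truth = np.exp(x)                        # transcendental model function
    for _ in range(trials):
        y = truth + rng.normal(0, 0.05, n)   # contaminated data
        coef = np.polyfit(x, y, 3)
        errs.append(np.mean((np.polyval(coef, x) - truth) ** 2))
    return float(np.mean(errs))

sizes = [6, 10, 20, 50, 200]
mses = [mse_of_cubic_fit(n) for n in sizes]
print([round(m, 5) for m in mses])
```

As in the study, the error falls sharply for small sample sizes and the improvement diminishes once the sample is moderately large.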
Abd-Elhameed, W. M.
2014-01-01
This paper is concerned with deriving some new formulae expressing explicitly the high-order derivatives of Jacobi polynomials whose parameters differ by one or two, of any degree and of any order, in terms of the corresponding Jacobi polynomials. The derivative formulae for Chebyshev polynomials of the third and fourth kinds of any degree and of any order in terms of the corresponding Chebyshev polynomials are deduced as special cases. Some new reduction formulae for summing some terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivative formulae, an algorithm for solving special sixth-order boundary value problems is implemented by applying the Galerkin method. A numerical example is presented to ascertain the validity and applicability of the proposed algorithms. PMID:25386599
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. In the case of polynomial matrices, whose entries come from a ring, Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
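Exact division of polynomials is the elementary step of Euclidean elimination; a minimal sketch using rational arithmetic (illustrative only, not the authors' computer algebra system):

```python
from fractions import Fraction

def polydiv(num, den):
    """Exact Euclidean division of polynomials with Fraction coefficients.
    Polynomials are lists of coefficients, highest degree first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        q = num[0] / den[0]
        quot.append(q)
        pad = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - q * b for a, b in zip(num, pad)][1:]   # cancel leading term
    while num and num[0] == 0:       # strip leading zeros of the remainder
        num = num[1:]
    return quot, num

# (x^3 - 1) / (x - 1) = x^2 + x + 1, remainder 0, with no rounding error
q, r = polydiv([1, 0, 0, -1], [1, -1])
print(q, r)
```

Because every coefficient stays a rational number, repeated eliminations of this kind suffer no floating point instability, only growth in coefficient size (expression swell).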
Developing the Polynomial Expressions for Fields in the ITER Tokamak
NASA Astrophysics Data System (ADS)
Sharma, Stephen
2017-10-01
The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be refined. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced into the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.
Developing and Using an Applet to Enrich Students' Concept Image of Rational Polynomials
ERIC Educational Resources Information Center
Mason, John
2015-01-01
This article draws on extensive experience working with secondary and tertiary teachers and educators using an applet to display rational polynomials (up to degree 7 in numerator and denominator), as support for the challenge to deduce as much as possible about the graph from the graphs of the numerator and the denominator. Pedagogical and design…
Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression
NASA Astrophysics Data System (ADS)
Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.
2010-01-01
In this work, prediction of forced expiratory volume in 1 second (FEV1) in the pulmonary function test is carried out using a spirometer and support vector regression analysis. Pulmonary function data are measured with a flow volume spirometer from volunteers (N=175) using a standard data acquisition protocol. The acquired data are then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. The performance is evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that of abnormal subjects. Accuracy in prediction was found to be high for a regularization constant of C=10. Since FEV1 is the most significant parameter in the analysis of spirometric data, it appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.
From r-spin intersection numbers to Hodge integrals
NASA Astrophysics Data System (ADS)
Ding, Xiang-Mao; Li, Yuping; Meng, Lingxian
2016-01-01
The Generalized Kontsevich Matrix Model (GKMM) with a certain given potential is the partition function of r-spin intersection numbers. We represent this GKMM in terms of fermions, expand it in terms of the Schur polynomials by the boson-fermion correspondence, and link it with a Hurwitz partition function and a Hodge partition function by operators in a ĜL(∞) group. Then, from a W_{1+∞} constraint on the partition function of r-spin intersection numbers, we obtain a W_{1+∞} constraint for the Hodge partition function. The W_{1+∞} constraint completely determines the Schur polynomial expansion of the Hodge partition function.
Calculators and Polynomial Evaluation.
ERIC Educational Resources Information Center
Weaver, J. F.
The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
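The evaluation scheme suited to such calculators is Horner's rule, which needs only one multiply and one add per coefficient so each partial result fits in a single memory register; a brief sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients in highest-degree-first order.
    One multiply and one add per step: ideal for a non-programmable calculator."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3, keyed as ((2*3 - 6)*3 + 2)*3 - 1
print(horner([2, -6, 2, -1], 3))  # 5
```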
On Partial Fraction Decompositions by Repeated Polynomial Divisions
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2017-01-01
We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…
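A sketch of the repeated-division idea for a repeated linear factor; the helper names and worked examples are ours, not the article's. Dividing the numerator by (x - a) over and over yields remainders that are exactly the partial fraction coefficients:

```python
def divide_linear(coeffs, a):
    """Synthetic division of p(x) (highest degree first) by (x - a)."""
    quot = []
    acc = 0
    for c in coeffs:
        acc = acc * a + c
        quot.append(acc)
    return quot[:-1], quot[-1]       # quotient coefficients, remainder p(a)

def pf_repeated_linear(num, a, k):
    """Coefficients c_j in num/(x-a)^k = sum_j c_j/(x-a)^(k-j), via repeated division."""
    cs = []
    for _ in range(k):
        num, rem = divide_linear(num, a)
        cs.append(rem)
    return cs                        # cs[0]/(x-a)^k + cs[1]/(x-a)^(k-1) + ...

# (x^2 + 1)/(x - 1)^2 = 1 + 2/(x - 1) + 2/(x - 1)^2  (the leftover quotient 1
# is the polynomial part, since this example is improper)
print(pf_repeated_linear([1, 0, 1], 1, 2))  # [2, 2]
```

No differentiation and no linear system is needed, which is the article's point.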
On generalized Melvin solution for the Lie algebra E_6
NASA Astrophysics Data System (ADS)
Bolokhov, S. V.; Ivashchuk, V. D.
2017-10-01
A multidimensional generalization of Melvin's solution for an arbitrary simple Lie algebra G is considered. The gravitational model in D dimensions, D ≥ 4, contains n 2-forms and l ≥ n scalar fields, where n is the rank of G. The solution is governed by a set of n functions H_s(z) obeying n ordinary differential equations with certain boundary conditions imposed. It was conjectured earlier that these functions should be polynomials (the so-called fluxbrane polynomials). The polynomials H_s(z), s = 1,…,6, for the Lie algebra E_6 are obtained and a corresponding solution for l = n = 6 is presented. The polynomials depend upon integration constants Q_s, s = 1,…,6. They obey symmetry and duality identities. The latter ones are used in deriving asymptotic relations for solutions at large distances. The power-law asymptotic relations for E_6-polynomials at large z are governed by the integer-valued matrix ν = A^{-1}(I + P), where A^{-1} is the inverse Cartan matrix, I is the identity matrix and P is a permutation matrix, corresponding to a generator of the Z_2-group of symmetry of the Dynkin diagram. The 2-form fluxes Φ^s, s = 1,…,6, are calculated.
Coupled Waves on a Periodically Supported Timoshenko Beam
NASA Astrophysics Data System (ADS)
HECKL, MARIA A.
2002-05-01
A mathematical model is presented for the propagation of structural waves on an infinitely long, periodically supported Timoshenko beam. The wave types that can exist on the beam are bending waves with displacements in the horizontal and vertical directions, compressional waves and torsional waves. These waves are affected by the periodic supports in two ways: their dispersion relation spectra show passing and stopping bands, and coupling of the different wave types tends to occur. The model in this paper could represent a railway track where the beam represents the rail and an appropriately chosen support type represents the pad/sleeper/ballast system of a railway track. Hamilton's principle is used to calculate the Green function matrix of the free Timoshenko beam without supports. The supports are incorporated into the model by combining the Green function matrix with the superposition principle. Bloch's theorem is applied to describe the periodicity of the supports. This leads to polynomials with several solutions for the Bloch wave number. These solutions are obtained numerically for different combinations of wave types. Two support types are examined in detail: mass supports and spring supports. More complex support types, such as mass/spring systems, can be incorporated easily into the model.
Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G
2011-06-28
We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing multi-trait models with random regression models in genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
Local zeta factors and geometries under Spec Z
NASA Astrophysics Data System (ADS)
Manin, Yu I.
2016-08-01
The first part of this note shows that the odd-period polynomial of each Hecke cusp eigenform for the full modular group produces, via the Rodriguez-Villegas transform ([1]), a polynomial satisfying a functional equation of zeta type and having non-trivial zeros only on the middle line of its critical strip. The second part discusses the Chebyshev lambda-structure of the polynomial ring as Borger's descent data to F_1 and suggests its role in a possible relation of the Γ_R-factor to 'real geometry over F_1' (cf. [2]).
The neighbourhood polynomial of some families of dendrimers
NASA Astrophysics Data System (ADS)
Nazri Husin, Mohamad; Hasni, Roslan
2018-04-01
The neighbourhood polynomial N(G,x) is the generating function for the number of faces of each cardinality in the neighbourhood complex of a graph. It is defined as N(G,x) = Σ_{U ∈ N(G)} x^{|U|}, where N(G) is the neighbourhood complex of the graph: its vertices are the vertices of the graph, and its faces are the subsets of vertices that have a common neighbour. A dendrimer is an artificially manufactured or synthesized molecule built up from branched units called monomers. In this paper, we compute this polynomial for some families of dendrimers.
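The definition above can be sketched directly from first principles. In this hedged illustration (the function name and the toy star graph are mine, not from the paper), a subset U is a face exactly when some vertex is adjacent to every member of U:

```python
from itertools import combinations

def neighbourhood_polynomial(adj):
    """Coefficients {k: count} of N(G,x) = sum over vertex sets U with a
    common neighbour of x^|U|; adj maps vertex -> set of neighbours."""
    verts = list(adj)
    coeffs = {}
    for k in range(1, len(verts) + 1):
        for U in combinations(verts, k):
            # U is a face iff some vertex w is adjacent to every u in U
            if any(set(U) <= adj[w] for w in verts):
                coeffs[k] = coeffs.get(k, 0) + 1
    return coeffs

# Toy example: the star K_{1,3}, centre 0 joined to leaves 1, 2, 3
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(neighbourhood_polynomial(star))   # {1: 4, 2: 3, 3: 1}
```

Here every leaf pair and the full leaf set share the centre as a common neighbour, giving N(G,x) = 4x + 3x² + x³ for this toy graph.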
Kent, Stephen M.
2018-02-15
If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
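The Bernstein-expansion step that underlies the framework above can be illustrated in isolation: the Bernstein coefficients of a polynomial on [0,1] enclose its range, which is what makes them useful for bounding failure sets. A hedged sketch using the standard power-to-Bernstein change of basis (not the authors' code):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0,1] of p(x) = sum_j a[j]*x^j,
    via the standard identity b_i = sum_{j<=i} C(i,j)/C(n,j) * a[j]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

def range_enclosure(a):
    """[min_i b_i, max_i b_i] is guaranteed to contain the range of p
    over [0,1] (the enclosure may be conservative)."""
    b = bernstein_coeffs(a)
    return min(b), max(b)

# p(x) = x^2 - x: the true range on [0,1] is [-0.25, 0]
print(range_enclosure([0, -1, 1]))   # (-0.5, 0.0), a valid enclosure
```

The enclosure tightens under degree elevation or subdivision, which is how such bounds can be made "arbitrarily tight with additional computational effort".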
Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.
Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko
2014-04-01
The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.
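The Hermite machinery referred to above rests on the classical three-term recurrence, which is enough to generate the fourth-order polynomials the abstract says are needed. An illustrative fragment for the probabilists' Hermite polynomials (not the paper's lattice Boltzmann code):

```python
def hermite_prob(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x), He_0 = 1, He_1 = x."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# He_4(x) = x^4 - 6x^2 + 3 is the highest order the abstract requires
print(hermite_prob(4, 2.0))   # 16 - 24 + 3 = -5.0
```

Expanding an equilibrium distribution in this basis and truncating at order four is what recovers the correct energy (rather than only mass and momentum) moments.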
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
Duong, Manh Hong; Han, The Anh
2016-12-01
In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations for our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.
Smoothing optimization of supporting quadratic surfaces with Zernike polynomials
NASA Astrophysics Data System (ADS)
Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu
2018-03-01
A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighbouring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by applying the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.
A Lagrange-type projector on the real line
NASA Astrophysics Data System (ADS)
Mastroianni, G.; Notarangelo, I.
2010-01-01
We introduce an interpolation process based on some of the zeros of the m th generalized Freud polynomial. Convergence results and error estimates are given. In particular we show that, in some important function spaces, the interpolating polynomial behaves like the best approximation. Moreover the stability and the convergence of some quadrature rules are proved.
Tsallis p, q-deformed Touchard polynomials and Stirling numbers
NASA Astrophysics Data System (ADS)
Herscovici, O.; Mansour, T.
2017-01-01
In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
Virasoro constraints and polynomial recursion for the linear Hodge integrals
NASA Astrophysics Data System (ADS)
Guo, Shuai; Wang, Gehao
2017-04-01
The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.
2013-08-01
Approved for public release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, … Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by …
1993-01-29
Bessel functions and Jacobi functions (cf. [2]). References: [1] R. Askey & J. Wilson, Some basic hypergeometric orthogonal polynomials that generalize… 1; 1] can be treated as a part of the general theory of T-systems (see [8] for that theory and [7] for some aspects of the Chebyshev polynomials theory)… waves in elastic media. It has been known for some time that these multiplicities sometimes occur for topological reasons and are present generically, see
Decomposition of the polynomial kernel of arbitrary higher spin Dirac operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eelbode, D., E-mail: David.Eelbode@ua.ac.be; Raeymaekers, T., E-mail: Tim.Raeymaekers@UGent.be; Van der Jeugt, J., E-mail: Joris.VanderJeugt@UGent.be
2015-10-15
In a series of recent papers, we have introduced higher spin Dirac operators, which are generalisations of the classical Dirac operator. Whereas the latter acts on spinor-valued functions, the former acts on functions taking values in arbitrary irreducible half-integer highest weight representations for the spin group. In this paper, we describe how the polynomial kernel spaces of such operators decompose in irreducible representations of the spin group. We will hereby make use of results from representation theory.
Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.
Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh
2015-01-01
In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. 
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
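The moments-to-quadrature step described above ("a handful of matrix operations on the Hankel matrix of moments") can be sketched with the generic Cholesky/Golub-Welsch construction, valid under the stated positivity assumption on the moment matrix. This is my illustration of the underlying linear algebra, not the SAMBA implementation:

```python
import numpy as np

def quadrature_from_moments(m):
    """Gaussian quadrature nodes and weights recovered from raw moments
    m[0..2n]: Cholesky-factor the Hankel moment matrix, read off the
    three-term recurrence coefficients, then take eigenvalues/vectors
    of the resulting Jacobi matrix (Golub-Welsch)."""
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T            # H = R^T R, R upper triangular
    alpha = np.zeros(n)
    beta = np.zeros(max(n - 1, 0))
    for k in range(n):
        alpha[k] = R[k, k + 1] / R[k, k]
        if k > 0:
            alpha[k] -= R[k - 1, k] / R[k - 1, k - 1]
            beta[k - 1] = R[k, k] / R[k - 1, k - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0] ** 2          # squared first components
    return nodes, weights

# Moments of the uniform weight on [-1, 1]: 2, 0, 2/3, 0, 2/5
nodes, weights = quadrature_from_moments([2, 0, 2/3, 0, 2/5])
print(nodes, weights)   # two-point Gauss-Legendre: +-1/sqrt(3), weights 1, 1
```

Feeding in histogram moments instead of analytic ones is exactly what makes the approach "arbitrary": no distributional assumption enters beyond existence and positivity.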
NASA Technical Reports Server (NTRS)
Zhang, Zhimin; Tomlinson, John; Martin, Clyde
1994-01-01
In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including the polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate the properties of different spline approximations.
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.
2014-01-01
Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure is proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
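Sturm's Theorem as used above is easy to state in code: build the Sturm chain by repeated polynomial remaindering (negating each remainder) and subtract the sign-variation counts at the interval endpoints. A hedged Python sketch of the classical algorithm; the paper's actual formalization is in PVS, not Python:

```python
def sturm_count(p, a, b):
    """Number of distinct real roots of p in (a, b] via Sturm's theorem.
    Coefficients are listed highest degree first."""
    def deriv(q):
        n = len(q) - 1
        return [c * (n - i) for i, c in enumerate(q[:-1])]
    def rem(num, den):                      # polynomial remainder
        num = num[:]
        while len(num) >= len(den):
            q = num[0] / den[0]
            for i in range(len(den)):
                num[i] -= q * den[i]
            num.pop(0)                      # leading term cancelled
        while num and abs(num[0]) < 1e-12:  # strip tiny leading coeffs
            num.pop(0)
        return num
    chain = [p[:], deriv(p)]
    while len(chain[-1]) > 1:
        r = rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])       # Sturm chain negates remainders
    def changes(x):
        signs = []
        for q in chain:
            v = 0.0
            for c in q:                     # Horner evaluation
                v = v * x + c
            if abs(v) > 1e-12:
                signs.append(v > 0)
        return sum(s != t for s, t in zip(signs, signs[1:]))
    return changes(a) - changes(b)

# x^3 - x has roots -1, 0, 1
print(sturm_count([1, 0, -1, 0], -2, 2))   # 3
```

Combining this count with bisection and endpoint checks gives exactly the kind of sign-determination procedure (positive, nonnegative, nonzero, ...) the abstract describes.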
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewsuk, Kevin Gregory; Arguello, Jose Guadalupe, Jr.; Reiterer, Markus W.
2006-02-01
The ease and ability to predict sintering shrinkage and densification with the Skorohod-Olevsky viscous sintering (SOVS) model within a finite-element (FE) code have been improved with the use of an Arrhenius-type viscosity function. The need for a better viscosity function was identified by evaluating SOVS model predictions made using a previously published polynomial viscosity function. Predictions made using the original, polynomial viscosity function do not accurately reflect experimentally observed sintering behavior. To more easily and better predict sintering behavior using FE simulations, a thermally activated viscosity function based on creep theory was used with the SOVS model. In comparison with the polynomial viscosity function, SOVS model predictions made using the Arrhenius-type viscosity function are more representative of experimentally observed viscosity and sintering behavior. Additionally, the effects of changes in heating rate on densification can easily be predicted with the Arrhenius-type viscosity function. Another attribute of the Arrhenius-type viscosity function is that it provides the potential to link different sintering models. For example, the apparent activation energy, Q, for densification used in the construction of the master sintering curve for a low-temperature cofire ceramic dielectric has been used as the apparent activation energy for material flow in the Arrhenius-type viscosity function to predict heating rate-dependent sintering behavior using the SOVS model.
NASA Astrophysics Data System (ADS)
Mandal, Sudhansu S.; Mukherjee, Sutirtha; Ray, Koushik
2018-03-01
A method for determining the ground state of a planar interacting many-electron system in a magnetic field perpendicular to the plane is described. The ground state wave-function is expressed as a linear combination of a set of basis functions. Given only the flux and the number of electrons describing an incompressible state, we use the combinatorics of partitioning the flux among the electrons to derive the basis wave-functions as linear combinations of Schur polynomials. The procedure ensures that the basis wave-functions form representations of the angular momentum algebra. We exemplify the method by deriving the basis functions for the 5/2 quantum Hall state with a few particles. We find that one of the basis functions is precisely the Moore-Read Pfaffian wave function.
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
NASA Astrophysics Data System (ADS)
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu
2013-11-28
A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatom reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients
NASA Technical Reports Server (NTRS)
Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas
1994-01-01
We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
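The banded-inverse observation can be illustrated for the Chebyshev family, whose integration operator acts tridiagonally on expansion coefficients. A sketch of the standard recurrence (my illustration of the principle, not the authors' solver):

```python
def chebyshev_integrate(c):
    """Coefficients of an antiderivative of f = sum_n c[n]*T_n(x),
    using int T_0 = T_1, int T_1 = (T_0 + T_2)/4 and, for n >= 2,
    int T_n = T_{n+1}/(2(n+1)) - T_{n-1}/(2(n-1)).  Each input
    coefficient touches at most two outputs, so the operator is banded."""
    n = len(c)
    b = [0.0] * (n + 1)
    if n > 0:
        b[1] += c[0]
    if n > 1:
        b[0] += c[1] / 4
        b[2] += c[1] / 4
    for k in range(2, n):
        b[k + 1] += c[k] / (2 * (k + 1))
        b[k - 1] -= c[k] / (2 * (k - 1))
    return b

# f = T_2 (= 2x^2 - 1): an antiderivative is T_3/6 - T_1/2 (+ const)
print(chebyshev_integrate([0, 0, 1]))
```

Because multiplication by a polynomial coefficient function is also banded in such bases, a whole differential operator with rational coefficients reduces to a banded linear system, which is the source of the O(N) operation count claimed above.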
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1993-01-01
Research activities performed during the period of 29 June 1993 through 31 August 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found; this technique, called 'stability by linear process', yields an algorithm. In analysis, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and also a result on the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design, and we provide a framework for solving ACS. Finally, copies of the outline of our results are provided in the appendix, together with an administrative item.
NASA Astrophysics Data System (ADS)
Kent, Stephen M.
2018-04-01
If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.
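The spin-weighted construction extends the ordinary Zernike basis, whose radial part has a closed-form sum. As background, a sketch of that standard formula (this is textbook material, not the paper's spin-weighted methodology):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho); requires n - |m| even.
    R_n^m(rho) = sum_k (-1)^k (n-k)! /
                 (k! ((n+m)/2-k)! ((n-m)/2-k)!) * rho^(n-2k)."""
    m = abs(m)
    if (n - m) % 2:
        raise ValueError("n - |m| must be even")
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k)) * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

print(zernike_radial(2, 0, 0.5))   # R_2^0 = 2*rho^2 - 1, so -0.5 here
```

Pairing these radial parts with azimuthal factors of definite angular dependence is what the spin-weighted generalization organizes field-position dependence around.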
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
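The closed-form claim for Onemax can be sketched directly: after flipping each bit independently with probability p, the fitness changes by a difference of two binomial counts, and each outcome probability is a polynomial in p. An illustrative computation of that distribution (not the paper's Krawtchouk-polynomial derivation):

```python
from math import comb

def onemax_fitness_dist(n, k, p):
    """Distribution over Onemax fitness values after uniform bit-flip
    mutation (flip probability p) of an n-bit string with k ones:
    new fitness = k - X + Y, X ~ Bin(k, p), Y ~ Bin(n-k, p)."""
    dist = [0.0] * (n + 1)
    for x in range(k + 1):            # ones flipped to zero
        for y in range(n - k + 1):    # zeros flipped to one
            prob = (comb(k, x) * p**x * (1 - p)**(k - x)
                    * comb(n - k, y) * p**y * (1 - p)**(n - k - y))
            dist[k - x + y] += prob
    return dist

probs = onemax_fitness_dist(4, 2, 0.5)
print(probs)   # at p = 1/2 every string is uniform: Bin(4, 1/2) masses
```

Viewing each `dist[f]` as a function of symbolic p recovers the statement that the fitness distribution is polynomial in the flip probability.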
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
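The divided-difference algorithm at issue can be sketched as follows; the node set (Chebyshev points on an interval of size 4, as the abstract highlights), the ordering and the test function are my illustrative choices, not the paper's:

```python
from math import cos, pi

def divided_differences(xs, ys):
    """Newton divided-difference coefficients, computed in place."""
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    r = c[-1]
    for i in range(len(c) - 2, -1, -1):
        r = r * (x - xs[i]) + c[i]
    return r

# Chebyshev nodes on [-2, 2], an interval of size 4
n = 16
xs = [2 * cos((2 * i + 1) * pi / (2 * n)) for i in range(n)]
ys = [x**3 - x for x in xs]          # a low-degree test function
c = divided_differences(xs, ys)
print(abs(newton_eval(c, xs, 0.3) - (0.3**3 - 0.3)))  # tiny residual
```

The abstract's point is that at large n this numerically stable behaviour depends on how the nodes are ordered when the divided-difference table is built, not only on the node set itself.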
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
NASA Astrophysics Data System (ADS)
Yañez-Navarro, G.; Sun, Guo-Hua; Sun, Dong-Sheng; Chen, Chang-Yuan; Dong, Shi-Hai
2017-08-01
A few important integrals involving the product of two universal associated Legendre polynomials P_{l′}^{m′}(x), P_{k′}^{n′}(x) and x^{2a}(1 - x^2)^{-p-1}, x^b(1 ± x)^{-p-1} and x^c(1 - x^2)^{-p-1}(1 ± x) are evaluated using the operator form of Taylor's theorem and an integral over a single universal associated Legendre polynomial. These integrals are more general since the quantum numbers are unequal, i.e. l′ ≠ k′ and m′ ≠ n′. Their selection rules are also given. We also verify the correctness of those integral formulas numerically. Supported by 20170938-SIP-IPN, Mexico.
1991-03-01
[Scanned-report residue: list-of-figures entries and calibration equations. Recoverable items: Fig. 2.3, "Target Temperature as a Function of the Pyrometer Temperature"; Fig. 2.4, "Emitter Temperature as a Function of the Diode Target Temperature"; Fig. 2.5, "Experimental Calibration Data and Polynomial Fit for ASTAR-811C Diode"; calibration fits including P = 12.2152(V) - 0.0099 (Eq. 5.2, maximum error 0.0093%) and, at TR = 420 K, P = 4.5541(V)^3 - 23.5818(V)^2 + 18.1602(V) + 0.002 (Eq. 5.3, maximum error 1.632%).]
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Pólya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián
2013-01-01
In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this end, derived in a data-driven fashion. Initially, model selection was carried out among the second- and first-order fractional polynomials on the one hand and the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
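A minimal sketch of the first-order fractional polynomial selection step described above, assuming synthetic data and ordinary least squares (the study itself used mixed models for longitudinal titers):

```python
import numpy as np

# A first-order fractional polynomial (FP1) fits y = b0 + b1*x^p with the power
# p chosen from the conventional set below; p = 0 denotes log(x).
POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_transform(x, p):
    return np.log(x) if p == 0 else x**p

def fit_fp1(x, y):
    # Try every candidate power and keep the fit with the smallest SSE.
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ coef) ** 2)
        if best is None or sse < best[2]:
            best = (p, coef, sse)
    return best

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)                                 # e.g. months of follow-up
y = 2.0 - 1.5 * np.log(x) + rng.normal(0, 0.05, x.size)    # decaying titer-like curve
p, beta, sse = fit_fp1(x, y)
print(p)  # the log transform (p = 0) matches the generating model
```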
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
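The radix-k construction of X(w) can be illustrated with the simplest possible automaton, one that emits i.i.d. fair bits; its distribution function is F(x) = x. This is a hypothetical toy case, not an example from the paper:

```python
import numpy as np

# An automaton over {0,1} emitting independent fair coin flips: reading the
# emitted word w as a radix-2 expansion X(w) = sum_i b_i 2^(-i) gives a random
# variable that is uniform on [0,1], so F(x) = Prob{X(w) < x} = x.
rng = np.random.default_rng(5)
bits = rng.integers(0, 2, size=(100_000, 30))    # 30 bits of each word: enough precision
X = bits @ (0.5 ** np.arange(1, 31))             # X(w) for each sampled word

for x in [0.25, 0.5, 0.8]:
    print(x, np.mean(X < x))                     # empirical F(x), close to x
```

State-dependent emission probabilities would produce the non-uniform (and possibly non-smooth) distribution functions the paper classifies.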
Explaining Support Vector Machines: A Color Based Nomogram
Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo
2016-01-01
Problem setting: Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used, and hence the methods are often used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. Objective: In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results: Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of the polynomial kernel, width of the RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions: This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811
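The notion of explainability used above, a decision written as a sum of contributions each depending on one or two variables, can be sketched for a degree-2 polynomial kernel. The dual coefficients below are hypothetical random numbers standing in for a fitted SVM:

```python
import numpy as np

# For K(x,z) = (1 + x.z)^2 in two variables, the kernel decision function
# f(x) = sum_i a_i K(s_i, x) decomposes exactly into a constant, two univariate
# contributions and one pairwise term, which is why polynomial kernels up to
# degree two are represented exactly in a nomogram-style plot.
rng = np.random.default_rng(1)
S = rng.normal(size=(5, 2))   # stand-in "support vectors"
a = rng.normal(size=5)        # stand-in dual coefficients times labels

def f(x):
    return sum(a[i] * (1.0 + S[i] @ x) ** 2 for i in range(len(a)))

# Expand (1 + s1*x1 + s2*x2)^2 and collect terms per input variable:
c0  = np.sum(a)                                                   # constant
g1  = lambda x1: np.sum(a * (2*S[:, 0]*x1 + S[:, 0]**2 * x1**2))  # x1 only
g2  = lambda x2: np.sum(a * (2*S[:, 1]*x2 + S[:, 1]**2 * x2**2))  # x2 only
g12 = lambda x1, x2: np.sum(a * 2*S[:, 0]*S[:, 1]*x1*x2)          # pairwise

x = rng.normal(size=2)
print(np.isclose(f(x), c0 + g1(x[0]) + g2(x[1]) + g12(x[0], x[1])))  # True
```

For an RBF kernel no finite decomposition of this kind exists, which is why the paper can only offer an approximation with a reliability indication there.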
Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamini, Vittorino
2010-02-15
Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C^∞ functions can be expressed as C^∞ functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except the two largest groups E_7 and E_8. In this paper the flat basic sets of invariant homogeneous polynomials of E_7 and E_8 and the corresponding P-matrices are determined explicitly. Using the results reported here, one is able to determine easily the P-matrices corresponding to any other integrity basis of E_7 or E_8. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E_7 and E_8 relative to a flat basis or to any other integrity basis. The results obtained here may be employed concretely to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E_7 and E_8 or one of the Lie groups E_7 and E_8 in their adjoint representations.
NASA Astrophysics Data System (ADS)
Bożejko, Marek; Lytvynov, Eugene
2011-03-01
Let T be an underlying space with a non-atomic measure σ on it. In [Comm. Math. Phys. 292, 99-129 (2009)] the Meixner class of non-commutative generalized stochastic processes with freely independent values, ω = (ω(t))_{t ∈ T}, was characterized through the continuity of the corresponding orthogonal polynomials. In this paper, we derive a generating function for these orthogonal polynomials. The first question we have to answer is: what should serve as a generating function for a system of polynomials in infinitely many non-commuting variables? We construct a class of operator-valued functions Z = (Z(t))_{t ∈ T} such that Z(t) commutes with ω(s) for any s, t ∈ T. Then a generating function can be understood as G(Z, ω) = Σ_{n=0}^∞ ∫_{T^n} P^{(n)}(ω(t_1), ..., ω(t_n)) Z(t_1) ... Z(t_n) σ(dt_1) ... σ(dt_n), where P^{(n)}(ω(t_1), ..., ω(t_n)) is (the kernel of) the nth orthogonal polynomial. We derive an explicit form of G(Z, ω), which has a resolvent form and resembles the generating function in the classical case, albeit it involves integrals of non-commuting operators. We finally discuss a related problem of the action of the annihilation operators ∂_t, t ∈ T. In contrast to the classical case, we prove that the operators ∂_t related to the free Gaussian and Poisson processes have a property of globality. This result is genuinely infinite-dimensional, since in one dimension one loses the notion of globality.
Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials
Corteel, Sylvie; Williams, Lauren K.
2010-01-01
We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials. PMID:20348417
Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials.
Corteel, Sylvie; Williams, Lauren K
2010-04-13
We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities alpha and gamma, and they may exit and enter at the right with probabilities beta and delta. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials.
Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.
Mahajan, Virendra N
2010-12-20
The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
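A numerical check of the orthogonality claim above, using products of Legendre polynomials over a pupil normalized to [-1, 1] x [-1, 1] (the indices and quadrature order are illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Gauss-Legendre quadrature is exact for these polynomial integrands.
xg, wx = leggauss(20)
yg, wy = leggauss(20)

def P(n, t):
    # The degree-n Legendre polynomial evaluated at t.
    return Legendre.basis(n)(t)

def inner(m1, n1, m2, n2):
    # <P_{m1}(x) P_{n1}(y), P_{m2}(x) P_{n2}(y)> over the rectangle: the double
    # integral factors into two 1-D integrals because the terms are separable.
    fx = np.sum(wx * P(m1, xg) * P(m2, xg))
    fy = np.sum(wy * P(n1, yg) * P(n2, yg))
    return fx * fy

print(inner(2, 3, 2, 3))  # nonzero: squared norm of the (2,3) term, 4/35
print(inner(2, 3, 1, 3))  # ~0: distinct terms are orthogonal
```

The separability used in the second function is exactly the property the abstract highlights: the compound terms inherit orthogonality from the one-dimensional Legendre polynomials.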
Symmetries and Invariants of Twisted Quantum Algebras and Associated Poisson Algebras
NASA Astrophysics Data System (ADS)
Molev, A. I.; Ragoucy, E.
We construct an action of the braid group B_N on the twisted quantized enveloping algebra U'_q(o_N) where the elements of B_N act as automorphisms. In the classical limit q → 1, we recover the action of B_N on the polynomial functions on the space of upper triangular matrices with ones on the diagonal. The action preserves the Poisson bracket on the space of polynomials which was introduced by Nelson and Regge in their study of quantum gravity and rediscovered in the mathematical literature. Furthermore, we construct a Poisson bracket on the space of polynomials associated with another twisted quantized enveloping algebra, U'_q(sp_{2n}). We use the Casimir elements of both twisted quantized enveloping algebras to reproduce and construct some well-known and new polynomial invariants of the corresponding Poisson algebras.
NASA Technical Reports Server (NTRS)
Geddes, K. O.
1977-01-01
If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.
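The integrated-form idea can be sketched with NumPy's built-in Chebyshev integration recurrence; here a simple Picard-style iteration on the coefficient vector solves y' = y, y(0) = 1, an illustrative toy problem rather than the paper's symbolic derivation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Integrated form of y' = y with y(0) = 1:  y(x) = 1 + integral_0^x y(t) dt.
# C.chebint implements the integration recurrence for Chebyshev coefficients,
# so the whole iteration stays in coefficient space.
N = 20                                # number of retained Chebyshev coefficients
c = np.zeros(N)
c[0] = 1.0                            # initial guess: y = 1
for _ in range(40):
    ci = C.chebint(c)[:N]             # antiderivative from 0, truncated to N terms
    ci[0] += 1.0 - C.chebval(0.0, ci) # enforce y(0) = 1 via the constant term
    c = ci

x = np.linspace(-1, 1, 5)
print(np.max(np.abs(C.chebval(x, c) - np.exp(x))))  # tiny: spectral accuracy
```

The fixed point of the iteration is the truncated Chebyshev expansion of e^x; in practice one would solve the linear recurrence system directly, as the paper does symbolically, rather than iterate.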
On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland W.
1992-01-01
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
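A sketch of why polynomial preconditioning helps, using the classical Chebyshev residual polynomial with a uniform weight; the paper's actual contribution, adaptive Bernstein-Szego weights fitted to the Lanczos eigenvalue estimates, is not reproduced here:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# For an SPD matrix with spectrum in [a, b], the degree-k residual polynomial
#   r(t) = T_k(((b + a) - 2t) / (b - a)) / T_k((b + a) / (b - a))
# satisfies r(0) = 1 and is uniformly small on [a, b], so the eigenvalues of
# the preconditioned operator p(A) A, namely 1 - r(lam), cluster near 1.
a, b = 0.01, 1.0                       # spectral bounds: kappa(A) = 100
lams = np.linspace(a, b, 500)          # eigenvalues of a model SPD matrix

def precond_eigs(k):
    e = np.zeros(k + 1)
    e[k] = 1.0                         # coefficient vector selecting T_k
    r = chebval(((b + a) - 2 * lams) / (b - a), e) / chebval((b + a) / (b - a), e)
    return 1.0 - r                     # eigenvalues of p(A) A

for k in [1, 3, 5]:
    pe = precond_eigs(k)
    print(k, pe.max() / pe.min())      # effective condition number after preconditioning
```

Even degree 5 reduces the condition number from 100 to under 10, which is the speedup polynomial preconditioning buys per matrix-vector product.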
NASA Astrophysics Data System (ADS)
Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) kernels are applied as the kernel function of Support Vector Regression (SVR) to estimate and predict the MTF value of the actual optical system according to experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared to the SVR_poly soft computing methodology.
Georeferencing CAMS data: Polynomial rectification and beyond
NASA Astrophysics Data System (ADS)
Yang, Xinghe
The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. 
A software module has been implemented with full integration of data preprocessing and rectification techniques under Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels which are compatible with the random RMS errors existed in the reference map coordinates.
Lee, Y.-G.; Zou, W.-N.; Pan, E.
2015-01-01
This paper presents a closed-form solution for the arbitrary polygonal inclusion problem with polynomial eigenstrains of arbitrary order in an anisotropic magneto-electro-elastic full plane. The additional displacements or eigendisplacements, instead of the eigenstrains, are assumed to be a polynomial with general terms of order M+N. By virtue of the extended Stroh formulism, the induced fields are expressed in terms of a group of basic functions which involve boundary integrals of the inclusion domain. For the special case of polygonal inclusions, the boundary integrals are carried out explicitly, and their averages over the inclusion are also obtained. The induced fields under quadratic eigenstrains are mostly analysed in terms of figures and tables, as well as those under the linear and cubic eigenstrains. The connection between the present solution and the solution via the Green's function method is established and numerically verified. The singularity at the vertices of the arbitrary polygon is further analysed via the basic functions. The general solution and the numerical results for the constant, linear, quadratic and cubic eigenstrains presented in this paper enable us to investigate the features of the inclusion and inhomogeneity problem concerning polynomial eigenstrains in semiconductors and advanced composites, while the results can further serve as benchmarks for future analyses of Eshelby's inclusion problem. PMID:26345141
Introduction to Real Orthogonal Polynomials
1992-06-01
[Scanned-text residue. Legible fragments: a motivation via Green's functions and the Dirichlet problem for the unit circle in the plane, which involves finding a harmonic function u(r, ...); an orthogonality relation for q-polynomials p_n(q^x; a, b; q); and closing remarks offering motivation and justification for continued study of the intrinsic structure of orthogonal polynomials, followed by the list of references.]
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
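The Gibbs phenomenon that this work overcomes can be seen in the Fourier partial sums of a square wave, whose overshoot near the jump does not decay as more terms are added (an illustrative sketch, not the paper's reconstruction procedure):

```python
import numpy as np

# Partial Fourier sums of sign(x) on [-pi, pi]: only odd harmonics appear.
# The maximum overshoots the jump by about 8.95% regardless of N, which is
# why naive truncated expansions cannot be exponentially accurate near a
# discontinuity.
x = np.linspace(-np.pi, np.pi, 20001)
for N in [32, 128, 512]:
    k = np.arange(1, N, 2)                                     # odd harmonics
    SN = (4 / np.pi) * np.sum(np.sin(np.outer(k, x)) / k[:, None], axis=0)
    print(N, SN.max())                                         # stays near 1.179, not 1
```

Recovering exponential accuracy from these same coefficients, including at the jump itself, is exactly what the reprojection approach of the abstract achieves.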
Lefebvre, J E; Zhang, V; Gazalet, J; Gryba, T; Sadaune, V
2001-09-01
The propagation of guided waves in continuous functionally graded plates is studied by using Legendre polynomials. Dispersion curves, and power and field profiles are easily obtained. Our computer program is validated by comparing our results against other calculations from the literature. Numerical results are also given for a graded semiconductor plate. It is felt that the present method could be of quite practical interest in waveguiding engineering, non-destructive testing of functionally graded materials (FGMs) to identify the best inspection strategies, or by means of a numerical inversion algorithm to determine through-thickness gradients in material parameters.
Quadrature formula for evaluating left bounded Hadamard type hypersingular integrals
NASA Astrophysics Data System (ADS)
Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Nik Long, N. M. A.; Okhunov, Abdurahim
2014-12-01
Left semi-bounded Hadamard-type hypersingular integrals (HSI) of the form H(h, x) = (1/π) √((1 + x)/(1 − x)) …
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
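A minimal sketch of the polynomial-distortion step: fitting a first-order (affine) 2D polynomial warp to control points by least squares. All data below are synthetic, fabricated for illustration:

```python
import numpy as np

# Control points: image coordinates xy and their reference map coordinates uv,
# related by an affine distortion plus measurement noise.
rng = np.random.default_rng(2)
n = 30
xy = rng.uniform(0, 100, size=(n, 2))
A_true = np.array([[1.02, 0.05], [-0.03, 0.98]])
t_true = np.array([5.0, -3.0])
uv = xy @ A_true.T + t_true + rng.normal(0, 0.1, (n, 2))

# Design matrix for the first-order polynomial terms [1, x, y]; higher-order
# distortion models simply append x^2, xy, y^2, ... as extra columns.
G = np.column_stack([np.ones(n), xy[:, 0], xy[:, 1]])
coef, *_ = np.linalg.lstsq(G, uv, rcond=None)   # one coefficient column per output

resid = uv - G @ coef
rms = np.sqrt(np.mean(resid**2))
print(rms)  # close to the 0.1 noise level of the control points
```

Spline or piecewise (finite element) variants, as the abstract notes, replace the single global polynomial with lower-order pieces when the distortion is too complex for one surface.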
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
Two-dimensional orthonormal trend surfaces for prospecting
NASA Astrophysics Data System (ADS)
Sarma, D. D.; Selvaraj, J. B.
Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating the trend coefficients are not ill-conditioned, and the method has greater convergence power than least-squares approximation, so orthonormal functions provide a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x^2 + xy + y^2 + ... + y^n, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample sets of data from India: gold accumulation from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets; in both cases, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable; if log-transformation is performed, the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration with gold assay data from the Champion lode system of the Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fitted, could be used for further prospecting in the area.
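The Gram-Schmidt construction described above can be sketched as follows, using a discrete inner product over the sample locations and synthetic data in place of the gold-assay values:

```python
import numpy as np

# Synthetic second-order trend plus noise at scattered sample locations.
rng = np.random.default_rng(3)
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
Z = 3 + 2 * x - y + 0.5 * x * y + rng.normal(0, 0.1, n)

# Gram-Schmidt on the monomial series 1, x, y, x^2, xy, y^2 with respect to the
# discrete inner product <f, g> = mean(f * g) over the sample points.
monomials = [np.ones(n), x, y, x * x, x * y, y * y]
basis = []
for v in monomials:
    v = v.copy()
    for q in basis:
        v -= np.mean(v * q) * q            # remove components along earlier basis terms
    basis.append(v / np.sqrt(np.mean(v * v)))

# Because the basis is orthonormal, each trend coefficient is an independent
# projection -- no normal-equation system to solve, hence no ill-conditioning.
coeffs = [np.mean(Z * q) for q in basis]
trend = sum(c * q for c, q in zip(coeffs, basis))
rms = np.sqrt(np.mean((Z - trend) ** 2))
print(rms)  # close to the 0.1 noise standard deviation
```

Adding a higher-degree term changes none of the existing coefficients, which is the practical payoff over refitting a conventional polynomial surface.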
Random complex dynamics and devil's coliseums
NASA Astrophysics Data System (ADS)
Sumi, Hiroki
2015-04-01
We investigate the random dynamics of polynomial maps on the Riemann sphere Ĉ and the dynamics of semigroups of polynomial maps on Ĉ. In particular, the dynamics of a semigroup G of polynomials whose planar postcritical set is bounded, and the associated random dynamics, are studied. In general, the Julia set of such a G may be disconnected. We show that if G is such a semigroup, then regarding the associated random dynamics, the chaos of the averaged system disappears in the C0 sense, and the function T∞ of the probability of tending to ∞ ∈ Ĉ is Hölder continuous on Ĉ and varies only on the Julia set of G. Moreover, the function T∞ has a kind of monotonicity. It turns out that T∞ is a complex analogue of the devil's staircase, and we call T∞ a 'devil's coliseum'. We investigate the details of T∞ when G is generated by two polynomials. In this case, T∞ varies precisely on the Julia set of G, which is a thin fractal set. Moreover, under this condition, we investigate the pointwise Hölder exponents of T∞.
Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold
2014-12-01
In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering used to form information granulation is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, genetic algorithm (GA) is exploited here to optimize the essential design parameters of the model (including fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs) of the network. To reduce dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments, in which we use several modeling benchmarks of different levels of complexity (different number of input variables and the number of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optical Imaging and Radiometric Modeling and Simulation
NASA Technical Reports Server (NTRS)
Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.
2010-01-01
OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials, to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes the target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).
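The Fourier-optics PSF calculation described above can be sketched as follows; the grid size, circular aperture, and defocus amplitude are illustrative assumptions, not OPTOOL parameters:

```python
import numpy as np

# Pupil-plane field = aperture * exp(i * 2*pi * WFE); the far-field PSF is the
# squared magnitude of its Fourier transform. The WFE map here is a single
# Zernike defocus term, (2r^2 - 1), with a 0.25-wave coefficient.
N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)            # circular aperture

defocus_waves = 0.25                         # Zernike defocus coefficient, in waves
wfe = defocus_waves * (2 * R2 - 1) * pupil   # wavefront-error map

field = pupil * np.exp(2j * np.pi * wfe)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4 * N, 4 * N)))) ** 2
psf /= psf.sum()                             # normalize to unit total energy
print(psf.max())  # aberrated peak: lower than the unaberrated (wfe = 0) value
```

Zero-padding the FFT (the s argument) plays the role of the high spatial sampling of the PSF mentioned in the abstract; radiometric effects such as read noise and dark current would be applied to this intensity map afterwards.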
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
Statistically generated weighted curve fit of residual functions for modal analysis of structures
NASA Technical Reports Server (NTRS)
Bookout, P. S.
1995-01-01
A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
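The weighting idea described above (weights derived from the variance of neighboring data points, feeding a weighted second-order polynomial fit) can be sketched on synthetic data. A hedged toy reconstruction, not the author's code: the synthetic residual function, window half-width, and noise model are all assumptions for the demo. Note that NumPy's `polyfit` weights multiply the residuals, so the square root of the inverse-variance weight is passed.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.linspace(1.0, 50.0, 200)                     # frequency axis
true = 1e-3 + 2e-7 * f**2                           # flat line with slight upward curvature
noise = rng.normal(0, 2e-4, size=f.size) * (1 + (f > 30) * 4)  # "ragged" high-frequency region
y = true + noise

# variance among neighboring points -> weights (low local variance = high weight)
half = 3
var = np.array([np.var(y[max(0, i - half):i + half + 1]) for i in range(y.size)])
w = 1.0 / (var + 1e-12)

coeffs = np.polyfit(f, y, deg=2, w=np.sqrt(w))      # weighted 2nd-order fit
residual_flexibility = np.polyval(coeffs, 0.0)      # value extrapolated toward 0 Hz
```

The down-weighting of the ragged region is what keeps the extrapolated residual flexibility value stable despite locally noisy data.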
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let S_n[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of S_n[f] and discuss the speed of the convergence of S_n[f] in weighted L^p space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial L_n[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-(1/2)x^2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding pair of weighted inequalities to hold for k = 0, 1, 2, ..., r.
Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.
Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente
2014-04-01
In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, computed iteratively. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum-of-squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent throughout the interior of its level sets.
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
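The fitting step the abstract evaluates, a 2nd-order polynomial fit to a tract followed by curvature computed from the fitted derivatives, can be illustrated on synthetic data. A sketch under stated assumptions: a 2D circular arc with a made-up noise level, whereas the study works with 3D diffusion-tensor fiber tracts.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa_true = 5.0                      # true curvature, 1/m
R = 1.0 / kappa_true
t = np.linspace(0.0, 0.5, 30)         # arc parameter along the "tract"
x = R * np.cos(t) + rng.normal(0, 1e-3, t.size)   # noisy tract coordinates
y = R * np.sin(t) + rng.normal(0, 1e-3, t.size)

# fit each coordinate with a 2nd-order polynomial in the arc parameter
px = np.polyfit(t, x, 2); py = np.polyfit(t, y, 2)
dx, dy = np.polyder(px), np.polyder(py)
ddx, ddy = np.polyder(dx), np.polyder(dy)

# curvature of the fitted curve at the midpoint of the parameter range
tm = t.mean()
num = abs(np.polyval(dx, tm) * np.polyval(ddy, tm)
          - np.polyval(dy, tm) * np.polyval(ddx, tm))
den = (np.polyval(dx, tm)**2 + np.polyval(dy, tm)**2) ** 1.5
kappa_est = num / den                 # should be near kappa_true
```

Differentiating the smooth fitted polynomials, instead of the raw noisy points, is what suppresses the noise-driven curvature bias the abstract describes.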
NASA Astrophysics Data System (ADS)
Ham, J.-Y.; Lee, J.
2016-09-01
We calculate the Chern-Simons invariants of twist-knot orbifolds using the Schläfli formula for the generalized Chern-Simons function on the family of twist knot cone-manifold structures. Following the general instruction of Hilden, Lozano, and Montesinos-Amilibia, we here present concrete formulae and calculations. We use the Pythagorean Theorem, which was used by Ham, Mednykh and Petrov, to relate the complex length of the longitude and the complex distance between the two axes fixed by two generators. As an application, we calculate the Chern-Simons invariants of cyclic coverings of the hyperbolic twist-knot orbifolds. We also derive some interesting results. The explicit formulae of the A-polynomials of twist knots are obtained from the complex distance polynomials. Hence the edge polynomials corresponding to the edges of the Newton polygons of the A-polynomials of twist knots can be obtained. In particular, the number of boundary components of every incompressible surface corresponding to slope -4n+2 turns out to be 2. Bibliography: 39 titles.
Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.
Kulkarni, Rishikesh; Rastogi, Pramod
2018-02-01
A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
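The linear-algebra core of such a moment-matching step can be sketched: requiring a degree-N polynomial PDF on an interval to reproduce moments mu_0..mu_N reduces to an (N+1)x(N+1) linear system. A minimal sketch, not the authors' implementation; the Beta(2,2) test case is chosen because its density is itself a quadratic polynomial, so the recovery is exact.

```python
import numpy as np

def polynomial_pdf_moments(mu, a=0.0, b=1.0):
    """Coefficients c_k of p(x) = sum_k c_k x^k on [a, b] matching moments mu_0..mu_N.

    Row m encodes integral_a^b x^m * x^k dx = (b^(m+k+1) - a^(m+k+1)) / (m+k+1).
    """
    N = len(mu) - 1
    M = np.array([[(b**(m + k + 1) - a**(m + k + 1)) / (m + k + 1)
                   for k in range(N + 1)] for m in range(N + 1)])
    return np.linalg.solve(M, np.asarray(mu, float))

# Beta(2,2) on [0,1] has pdf 6x(1-x); its moments are mu_m = 6/((m+2)(m+3)).
mu = [1.0, 0.5, 0.3]
c = polynomial_pdf_moments(mu)   # expect approximately [0, 6, -6]
```

For higher degrees the system matrix becomes Hilbert-like and ill-conditioned, which is presumably why an algorithmic setup with care about rigor, as the abstract stresses, matters in practice.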
Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes
Li, Degui; Li, Runze
2016-01-01
In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
Beyond the excised ensemble: modelling elliptic curve L-functions with random matrices
NASA Astrophysics Data System (ADS)
Cooper, I. A.; Morris, Patrick W.; Snaith, N. C.
2016-02-01
The ‘excised ensemble’, a random matrix model for the zeros of quadratic twist families of elliptic curve L-functions, was introduced by Dueñez et al (2012 J. Phys. A: Math. Theor. 45 115207). The excised model is motivated by a formula for central values of these L-functions in a paper by Kohnen and Zagier (1981 Invent. Math. 64 175-98). This formula indicates that for a finite set of L-functions from a family of quadratic twists, the central values are all either zero or are greater than some positive cutoff. The excised model imposes this same condition on the central values of characteristic polynomials of matrices from {SO}(2N). Strangely, the cutoff on the characteristic polynomials that results in a convincing model for the L-function zeros is significantly smaller than that which we would obtain by naively transferring Kohnen and Zagier’s cutoff to the {SO}(2N) ensemble. In the current paper we investigate a modification to the excised model. It lacks the simplicity of the original excised ensemble, but it serves to explain the reason for the unexpectedly low cutoff in the original excised model. Additionally, the distribution of central L-values is ‘choppier’ than the distribution of characteristic polynomials, in the sense that it is a superposition of a series of peaks: the characteristic polynomial distribution is a smooth approximation to this. The excised model did not attempt to incorporate these successive peaks, only the initial cutoff. Here we experiment with including some of the structure of the L-value distribution. The conclusion is that a critical feature of a good model is to associate the correct mass to the first peak of the L-value distribution.
Contragenic functions on spheroidal domains
NASA Astrophysics Data System (ADS)
García-Ancona, Raybel; Morais, Joao; Porter, R. Michael
2018-05-01
We construct bases of polynomials for the spaces of square-integrable harmonic functions which are orthogonal to the monogenic and antimonogenic $\mathbb{R}^3$-valued functions defined in a prolate or oblate spheroid.
a Unified Matrix Polynomial Approach to Modal Identification
NASA Astrophysics Data System (ADS)
Allemang, R. J.; Brown, D. L.
1998-04-01
One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. Particularly, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.
Planar harmonic polynomials of type B
NASA Astrophysics Data System (ADS)
Dunkl, Charles F.
1999-11-01
The hyperoctahedral group acting on R^N is the Weyl group of type B and is associated with a two-parameter family of differential-difference operators {T_i : 1 <= i <= N}. These operators are analogous to partial derivative operators. This paper finds all the polynomials h on R^N which are harmonic, Delta_B h = 0, and annihilated by T_i for i > 2, where the Laplacian Delta_B = sum_{i=1}^N T_i^2. They are given explicitly in terms of a novel basis of polynomials, defined by generating functions. The harmonic polynomials can be used to find wavefunctions for the quantum many-body spin Calogero model.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Ahmed, H. M.
2005-12-01
Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^l D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae for solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) to build and solve recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials belonging to the q-Hahn class is described.
Perturbations of Jacobi polynomials and piecewise hypergeometric orthogonal systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neretin, Yu A
2006-12-31
A family of non-complete orthogonal systems of functions on the ray [0, infinity) depending on three real parameters alpha, beta, theta is constructed. The elements of this system are piecewise hypergeometric functions with a singularity at x = 1. For theta = 0 these functions vanish on [1, infinity) and the system reduces to the Jacobi polynomials P_n^(alpha,beta) on the interval [0, 1]. In the general case the functions constructed can be regarded as an interpretation of the expressions P_{n+theta}^(alpha,beta). They are eigenfunctions of an exotic Sturm-Liouville boundary-value problem for the hypergeometric differential operator. The spectral measure for this problem is found.
Coupling coefficients for tensor product representations of quantum SU(2)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groenevelt, Wolter, E-mail: w.g.m.groenevelt@tudelft.nl
2014-10-15
We study tensor products of infinite-dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.
The Coulomb problem on a 3-sphere and Heun polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan
2013-08-15
The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.
A Novel Polygonal Finite Element Method: Virtual Node Method
NASA Astrophysics Data System (ADS)
Tang, X. H.; Zheng, C.; Zhang, J. H.
2010-05-01
Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a large number of integration points has to be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be naturally used to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.
An Efficient numerical method to calculate the conductivity tensor for disordered topological matter
NASA Astrophysics Data System (ADS)
Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.
2015-03-01
We propose a new efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism where both diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of the Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort is in the calculation of the expansion coefficients. It also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
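The kernel polynomial machinery mentioned above (Chebyshev moments of a rescaled Hamiltonian, damped by a smoothing kernel) can be illustrated on the simpler density-of-states problem. A hedged sketch using an exact-trace Chebyshev recurrence and the Jackson kernel; the random symmetric matrix and moment count are assumptions, and the paper's actual target is the Bastin conductivity formula, not the DOS.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_moments = 200, 60
H = rng.normal(size=(n, n)); H = (H + H.T) / 2   # random symmetric "Hamiltonian"
scale = np.linalg.norm(H, 2) * 1.05              # rescale spectrum into (-1, 1)
Ht = H / scale

# Chebyshev moments mu_m = Tr T_m(Ht) / n via the three-term recurrence
T_prev, T_curr = np.eye(n), Ht.copy()
mu = [np.trace(T_prev) / n, np.trace(T_curr) / n]
for _ in range(2, n_moments):
    T_prev, T_curr = T_curr, 2 * Ht @ T_curr - T_prev
    mu.append(np.trace(T_curr) / n)
mu = np.array(mu)

# Jackson kernel coefficients damp the Gibbs oscillations of the truncated series
m = np.arange(n_moments)
g = ((n_moments - m + 1) * np.cos(np.pi * m / (n_moments + 1))
     + np.sin(np.pi * m / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))) / (n_moments + 1)

# reconstruct the density of states on a grid
x = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(np.arange(n_moments), np.arccos(x)))   # T_m(x) values
dos = (g[0] * mu[0] + 2 * np.sum((g * mu)[1:, None] * T[1:], axis=0)) \
      / (np.pi * np.sqrt(1 - x**2))
```

In large-scale applications the trace is estimated stochastically with a few random vectors instead of full matrix recurrences; the structure of the moment loop stays the same.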
ERIC Educational Resources Information Center
Rebholz, Joachim A.
2017-01-01
Graphing functions is an important topic in algebra and precalculus high school courses. The functions that are usually discussed include polynomials, rational, exponential, and trigonometric functions along with their inverses. These functions can be used to teach different aspects of function theory: domain, range, monotonicity, inverse…
Trace of totally positive algebraic integers and integer transfinite diameter
NASA Astrophysics Data System (ADS)
Flammang, V.
2009-06-01
Explicit auxiliary functions can be used in the ``Schur-Siegel-Smyth trace problem''. In previous works, these functions were constructed only with polynomials having all their roots positive. Here, we use several polynomials with complex roots, which are found with Wu's algorithm, and we improve the known lower bounds for the absolute trace of totally positive algebraic integers. This improvement has a consequence for the search for Salem numbers with negative trace. The same method also gives a small improvement of the upper bound for the integer transfinite diameter of [0,1].
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
Petrović, Nikola Z; Belić, Milivoj; Zhong, Wei-Ping
2011-02-01
We obtain exact traveling wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional nonlinear Schrödinger equation with variable coefficients and polynomial Kerr nonlinearity of an arbitrarily high order. Exact solutions, given in terms of Jacobi elliptic functions, are presented for the special cases of cubic-quintic and septic models. We demonstrate that the widely used method for finding exact solutions in terms of Jacobi elliptic functions is not applicable to the nonlinear Schrödinger equation with saturable nonlinearity. ©2011 American Physical Society
Algebraic criteria for positive realness relative to the unit circle.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1973-01-01
A definition is presented of the circle positive realness of real rational functions relative to the unit circle in the complex variable plane. The problem of testing this kind of positive realness is reduced to the algebraic problem of determining the distribution of zeros of a real polynomial with respect to and on the unit circle. Such a reformulation of the problem avoids the search for explicit information about imaginary poles of rational functions. The stated algebraic problem is solved by applying the polynomial criteria of Marden (1966) and Jury (1964), and a completely recursive algorithm for circle positive realness is obtained.
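The underlying algebraic question, how many zeros of a real polynomial lie inside, on, and outside the unit circle, can be checked numerically. A sketch using companion-matrix roots as a stand-in for the recursive Marden/Jury table criteria the abstract applies; the example polynomial and tolerance are assumptions for the demo.

```python
import numpy as np

def zero_distribution(coeffs, tol=1e-9):
    """Count zeros of a real polynomial inside, on, and outside the unit circle.

    Direct numerical check via companion-matrix roots -- not the recursive
    Marden/Jury tabular tests, which avoid computing the roots explicitly.
    """
    r = np.abs(np.roots(coeffs))
    inside = int(np.sum(r < 1 - tol))
    on = int(np.sum(np.abs(r - 1) <= tol))
    outside = int(np.sum(r > 1 + tol))
    return inside, on, outside

# z^3 - 0.5 z^2 has roots 0, 0, 0.5: all strictly inside (Schur stable)
print(zero_distribution([1, -0.5, 0, 0]))   # (3, 0, 0)
```

The tabular criteria remain preferable in exact or symbolic settings, since root-finding near the unit circle is numerically delicate.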
Finding Limit Cycles in self-excited oscillators with infinite-series damping functions
NASA Astrophysics Data System (ADS)
Das, Debapriya; Banerjee, Dhruba; Bhattacharjee, Jayanta K.
2015-03-01
In this paper we present a simple method for finding the location of limit cycles of self-excited oscillators whose damping functions can be represented by some infinite convergent series. We have used standard results of first-order perturbation theory to arrive at amplitude equations. The approach has been kept pedagogic by first working out the cases of finite polynomials using elementary algebra. The method is then extended to various infinite polynomials, where the fixed points of the corresponding amplitude equations cannot be found in closed form. Hopf bifurcations for systems with nonlinear powers of the velocity have also been discussed.
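The first-order amplitude-equation condition can be sketched numerically: on a trial orbit x = a cos t, the amplitude is stationary where the damping does zero net work over one cycle. A hedged illustration on the classical van der Pol damping, a finite polynomial, rather than one of the paper's infinite-series examples; the root bracket is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Damping function of the van der Pol oscillator, x'' + x = eps * (1 - x^2) * x'
def damping(x, v):
    return (1.0 - x**2) * v

# First-order averaging: on x = a*cos(t), the fixed point of the amplitude
# equation is where the cycle-averaged projection of the damping vanishes.
def averaged(a):
    integrand = lambda th: damping(a * np.cos(th), -a * np.sin(th)) * np.sin(th)
    return quad(integrand, 0.0, 2.0 * np.pi)[0]

a_star = brentq(averaged, 0.5, 5.0)   # limit-cycle amplitude; exact value is 2
```

For the infinite-series damping functions the paper treats, the same averaged integral is evaluated term by term and the root is located numerically, exactly as the `brentq` call does here.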
Generalised quasiprobability distribution for Hermite polynomial squeezed states
NASA Astrophysics Data System (ADS)
Datta, Sunil; D'Souza, Richard
1996-02-01
Generalized quasiprobability distributions (QPD) for Hermite polynomial states are presented. These states are solutions of an eigenvalue equation which is quadratic in the creation and annihilation operators. Analytical expressions for the QPD are presented for some special cases of the eigenvalues. For large squeezing these analytical expressions for the QPD take the form of a finite series in even Hermite functions. These expressions very transparently exhibit the transition between the P, Q and W functions corresponding to the change of the s-parameter of the QPD. Further, they clearly show the two-photon nature of the processes involved in the generation of these states.
Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.
Temel, Burcin; Mills, Greg; Metiu, Horia
2008-03-27
We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
A polynomial based model for cell fate prediction in human diseases.
Ma, Lichun; Zheng, Jie
2017-12-21
Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from a Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.
Orthogonal polynomials for refinable linear functionals
NASA Astrophysics Data System (ADS)
Laurie, Dirk; de Villiers, Johan
2006-12-01
A refinable linear functional is one that can be expressed as a convex combination, defined by a finite number of mask coefficients, of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
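The downstream use of such recursion coefficients is the standard Golub-Welsch step: the Gauss nodes and weights come from the eigen-decomposition of the symmetric Jacobi matrix. The sketch below uses the known coefficients of the ordinary Legendre weight as a stand-in; the paper instead derives the coefficients directly from the mask of a refinable functional:

```python
import numpy as np

# Golub-Welsch: given the three-term recursion coefficients (a_k, b_k) of the
# monic orthogonal polynomials, build the symmetric Jacobi matrix; its
# eigenvalues are the Gauss nodes, and the squared first components of the
# eigenvectors (scaled by the zeroth moment mu0) are the weights.
def gauss_from_recursion(a, b, mu0):
    n = len(a)
    J = np.diag(a) + np.diag(np.sqrt(b[1:n]), 1) + np.diag(np.sqrt(b[1:n]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0] ** 2
    return nodes, weights

# Known coefficients for the Legendre weight on [-1, 1]: a_k = 0,
# b_k = k^2 / (4k^2 - 1), zeroth moment mu0 = 2.
n = 5
k = np.arange(n)
a = np.zeros(n)
b = k.astype(float) ** 2 / (4.0 * k ** 2 - 1.0)  # b[0] is unused
nodes, w = gauss_from_recursion(a, b, mu0=2.0)

# The 5-point rule is exact up to degree 9; check on x^8, whose integral is 2/9
approx = np.sum(w * nodes ** 8)
```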
Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji
2016-08-18
Quantum computers can efficiently perform full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between the approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct a wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants with only a polynomial number of quantum operations.
Probing baryogenesis through the Higgs boson self-coupling
NASA Astrophysics Data System (ADS)
Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.
2018-04-01
The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.
Mathematical Minute: Rotating a Function Graph
ERIC Educational Resources Information Center
Bravo, Daniel; Fera, Joseph
2013-01-01
Using calculus only, we find the angles you can rotate the graph of a differentiable function about the origin and still obtain a function graph. We then apply the solution to odd and even degree polynomials.
Viewing the Roots of Polynomial Functions in Complex Variable: The Use of Geogebra and the CAS Maple
ERIC Educational Resources Information Center
Alves, Francisco Regis Vieira
2013-01-01
Admittedly, the Fundamental Theorem of Algebra (TFA) holds an important role in Complex Analysis (CA), as well as in other mathematical branches. In this article, we bring a discussion of the TFA, Rouché's theorem and the winding number, with the intention of analyzing the roots of a polynomial equation. We also propose a description for a…
New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.; Welch, L. R.
1977-01-01
An upper bound on the rate of a binary code as a function of minimum code distance (using a Hamming code metric) is arrived at from Delsarte-MacWilliams inequalities. The upper bound so found is asymptotically less than Levenshtein's bound, and a fortiori less than Elias' bound. Appendices review properties of Krawtchouk polynomials and Q-polynomials utilized in the rigorous proofs.
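The Krawtchouk polynomials mentioned in the appendices are concrete and easy to illustrate. A sketch of the binary (q = 2) case used in such coding bounds, via the explicit sum, together with their orthogonality over the binomial weight:

```python
from math import comb

# Binary Krawtchouk polynomial K_k(x; n) from the explicit sum
# K_k(x) = sum_j (-1)^j C(x, j) C(n - x, k - j).
def krawtchouk(k, x, n):
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 6
# Orthogonality over the binomial weight:
# sum_x C(n, x) K_k(x) K_l(x) = 2^n C(n, k) delta_{kl}
dot = sum(comb(n, x) * krawtchouk(1, x, n) * krawtchouk(2, x, n)
          for x in range(n + 1))
```

For example, K_1(x) = n - 2x, so the k = l = 1 sum evaluates to 2^n * C(n, 1).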
Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N
2010-09-01
Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present undesirable properties such as the overestimation of variances at the edges of lactation. Describing genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomials and the linear splines with 10 knots reduced to 3 parameters as the most useful models. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time demanding and can reduce convergence difficulties, as convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomials model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
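The Legendre covariables in such random regression test-day models are built by mapping days in milk onto [-1, 1] and evaluating the first few (normalized) Legendre polynomials there. A minimal sketch, with the lactation window and polynomial order as illustrative choices:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Each animal's design-matrix row holds normalized Legendre polynomials
# evaluated at the standardized test day; order 3 and the 5-305 d window
# below are common but illustrative choices.
def legendre_covariables(t, tmin, tmax, order=3):
    x = 2.0 * (t - tmin) / (tmax - tmin) - 1.0          # map to [-1, 1]
    phi = np.column_stack([L.legval(x, np.eye(order + 1)[k])
                           for k in range(order + 1)])
    norm = np.sqrt((2 * np.arange(order + 1) + 1) / 2.0)  # orthonormal scaling
    return phi * norm

dim = np.array([5.0, 150.0, 305.0])   # days in milk at three test days
Z = legendre_covariables(dim, 5.0, 305.0)
```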
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N x N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly. Usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). It reduces to the problem of approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementing the algorithm for some practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1) b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
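The central idea, applying a polynomial in A to v without ever forming f(A), can be sketched with the simplest polynomial choice. The paper interpolates f(z) at near-optimal points; the truncated Taylor polynomial below is only a stand-in that keeps the matrix-free structure:

```python
import numpy as np

# Apply the degree-(terms-1) Taylor polynomial of exp(A) to v using only
# matrix-vector products; f(A) itself is never constructed.
def expm_times_v(A, v, terms=20):
    result = v.copy()
    term = v.copy()
    for k in range(1, terms):
        term = A @ term / k        # accumulates A^k v / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generator of a plane rotation
v = np.array([1.0, 0.0])
w = expm_times_v(A, v)
# exp(A) rotates v by 1 radian: w is approximately (cos 1, -sin 1)
```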
Covariance functions for body weight from birth to maturity in Nellore cows.
Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G
2010-03-01
The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best to describe the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects were the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
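The stability property claimed for the Bernstein parametrization is easy to see in a sketch: the basis-function coefficients are the model parameters, and the profile depends on them smoothly. Coefficient values, units, and the function name below are illustrative, not taken from the paper:

```python
import numpy as np
from math import comb

# Evaluate a profile parametrized by Bernstein polynomials:
# Vs(z) = sum_i c_i * B_{i,n}(z) for normalized depth z in [0, 1],
# where B_{i,n}(z) = C(n, i) z^i (1 - z)^(n - i).
def bernstein_profile(c, z):
    n = len(c) - 1
    z = np.asarray(z, dtype=float)
    basis = np.array([comb(n, i) * z ** i * (1 - z) ** (n - i)
                      for i in range(n + 1)])
    return c @ basis

c = np.array([200.0, 350.0, 420.0, 480.0])  # hypothetical Vs coefficients (m/s)
z = np.linspace(0.0, 1.0, 5)
vs = bernstein_profile(c, z)
```

The endpoint values equal the first and last coefficients, and monotone coefficients give a monotone (gradient-like) profile, which is what makes the parametrization well behaved under small perturbations.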
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-02-01
In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
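The underlying numerical point, that a Chebyshev basis stays accurate across a whole interval while a Taylor series degrades away from its expansion point, can be checked on a toy function. The function and interval below are illustrative stand-ins, not the paper's distance-redshift relations:

```python
import numpy as np

# Compare degree-5 approximations of f(x) = 1/(1+x) on [0, 1.5]:
# a Taylor polynomial about x = 0 versus a Chebyshev fit on the interval.
f = lambda x: 1.0 / (1.0 + x)
x = np.linspace(0.0, 1.5, 400)

taylor = sum((-x) ** k for k in range(6))              # 1 - x + x^2 - ...
cheb = np.polynomial.chebyshev.Chebyshev.fit(x, f(x), deg=5)

err_taylor = np.max(np.abs(taylor - f(x)))
err_cheb = np.max(np.abs(cheb(x) - f(x)))
```

The Taylor series diverges beyond x = 1, so its worst-case error on the interval is large, while the Chebyshev approximation stays uniformly small; this is the stability argument in miniature.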
NASA Astrophysics Data System (ADS)
Liffner, Joel W.; Hewa, Guna A.; Peel, Murray C.
2018-05-01
Derivation of the hypsometric curve of a catchment, and of properties relating to that curve, requires both topographical data (commonly in the form of a Digital Elevation Model - DEM) and the estimation of a functional representation of that curve. An early investigation into catchment hypsometry concluded that 3rd order polynomials sufficiently describe the hypsometric curve, without considering higher order polynomials or the sensitivity of the hypsometric properties relating to the curve. Another study concluded that the hypsometric integral (HI) is robust against changes in DEM resolution, a conclusion drawn from a very limited sample size. Conclusions from these earlier studies have resulted in the adoption of methods deemed to be "sufficient" in subsequent studies, in addition to assumptions that the robustness of the HI extends to other hypsometric properties. This study investigates and demonstrates the sensitivity of hypsometric properties to DEM resolution, DEM type and polynomial order by assessing differences in hypsometric properties derived from 417 catchments and sub-catchments within South Australia. The sensitivity of hypsometric properties across DEM types and polynomial orders is found to be significant, which suggests that careful consideration of the methods chosen to derive catchment hypsometric information is required.
NASA Technical Reports Server (NTRS)
Lei, Ning; Xiong, Xiaoxiong
2016-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and imager, observing radiative energy from the Earth in 22 spectral bands from 0.41 to 12 microns, which include 14 reflective solar bands (RSBs). Extending the formula used by the Moderate Resolution Imaging Spectroradiometer instruments, the VIIRS currently determines the sensor aperture spectral radiance through a quadratic polynomial of its detector digital count. It has been known that for the RSBs the quadratic polynomial is not adequate over the design-specified spectral radiance region, and using a quadratic polynomial could drastically increase the errors in the polynomial coefficients, leading to possibly large errors in the determined aperture spectral radiance. In addition, it is very desirable to be able to extend the radiance calculation formula to correctly retrieve the aperture spectral radiance at levels beyond the design-specified range. In order to determine the aperture spectral radiance more accurately from the observed digital count, we examine a few polynomials of the detector digital count to calculate the sensor aperture spectral radiance.
The algebra of two dimensional generalized Chebyshev-Koornwinder oscillator
NASA Astrophysics Data System (ADS)
Borzov, V. V.; Damaskinsky, E. V.
2014-10-01
In the previous works of Borzov and Damaskinsky ["Chebyshev-Koornwinder oscillator," Theor. Math. Phys. 175(3), 765-772 (2013)] and ["Ladder operators for Chebyshev-Koornwinder oscillator," in Proceedings of the Days on Diffraction, 2013], the authors have defined the oscillator-like system that is associated with the two variable Chebyshev-Koornwinder polynomials. We call this system the generalized Chebyshev-Koornwinder oscillator. In this paper, we study the properties of infinite-dimensional Lie algebra that is analogous to the Heisenberg algebra for the Chebyshev-Koornwinder oscillator. We construct the exact irreducible representation of this algebra in a Hilbert space H of functions that are defined on a region which is bounded by the Steiner hypocycloid. The functions are square-integrable with respect to the orthogonality measure for the Chebyshev-Koornwinder polynomials and these polynomials form an orthonormalized basis in the space H. The generalized oscillator which is studied in the work can be considered as the simplest nontrivial example of multiboson quantum system that is composed of three interacting oscillators.
A two-step, fourth-order method with energy preserving properties
NASA Astrophysics Data System (ADS)
Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato
2012-09-01
We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained by the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case in the event that the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borzov, V. V., E-mail: borzov.vadim@yandex.ru; Damaskinsky, E. V., E-mail: evd@pdmi.ras.ru
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-12-01
Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it is only applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance based sensitivity analysis. The optimal number of training points is selected by using a distribution adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To illustrate its superior performance, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.
2008-06-01
Geometry Interpolation: The function space V_p^H consists of discontinuous, piecewise polynomials. This work used a polynomial basis for V_p^H such … between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and multi-dimensional setting. Before continuing with the … inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95. The
Polynomial modal analysis of slanted lamellar gratings.
Granet, Gérard; Randriamihaja, Manjakavola Honore; Raniriharinosy, Karyl
2017-06-01
The problem of diffraction by slanted lamellar dielectric and metallic gratings in classical mounting is formulated as an eigenvalue eigenvector problem. The numerical solution is obtained by using the moment method with Legendre polynomials as expansion and test functions, which allows us to enforce in an exact manner the boundary conditions which determine the eigensolutions. Our method is successfully validated by comparison with other methods including in the case of highly slanted gratings.
Diffraction Theory for Polygonal Apertures
1988-07-01
and utilized oblate spheroidal vector wave functions, and Nomura and Katsura (1955), who employed an expansion of the hypergeometric polynomial … factor relates directly to the orthogonality relations for the Chebyshev polynomials given below. … convergence. 3.1.2.2 Gaussian Illuminated Corner: In the sample calculation just discussed we discovered some of the basic characteristics of the GBE
Thornton, B S; Hung, W T; Irving, J
1991-01-01
The response decay data of living cells subjected to electric polarization are associated with their relaxation distribution function (RDF), which can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than for normal cells and might be used as parameters to differentiate them and their associated tissues.
Classes of exact Einstein Maxwell solutions
NASA Astrophysics Data System (ADS)
Komathiraj, K.; Maharaj, S. D.
2007-12-01
We find new classes of exact solutions to the Einstein Maxwell system of equations for a charged sphere with a particular choice of the electric field intensity and one of the gravitational potentials. The condition of pressure isotropy is reduced to a linear, second order differential equation which can be solved in general. Consequently we can find exact solutions to the Einstein Maxwell field equations corresponding to a static spherically symmetric gravitational potential in terms of hypergeometric functions. It is possible to find exact solutions which can be written explicitly in terms of elementary functions, namely polynomials and product of polynomials and algebraic functions. Uncharged solutions are regainable with our choice of electric field intensity; in particular we generate the Einstein universe for particular parameter values.
Mathematics of Zernike polynomials: a review.
McAlinden, Colm; McCartney, Mark; Moore, Jonathan
2011-11-01
Monochromatic aberrations of the eye principally originate from the cornea and the crystalline lens. Aberrometers operate via differing principles but function by either analysing the reflected wavefront from the retina or by analysing an image on the retina. Aberrations may be described as lower order or higher order aberrations, with Zernike polynomials being the most commonly employed fitting method. The complex mathematical aspects of the Zernike polynomial expansion series are detailed in this review. Refractive surgery has been a key clinical application of aberrometers; however, more recently aberrometers have been used in a range of other areas of ophthalmology, including corneal diseases, cataract and retinal imaging. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
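The fitting method referred to here rests on the standard closed form of the Zernike radial polynomials, which is short enough to sketch directly:

```python
from math import factorial

# Radial part of the Zernike polynomial Z_n^m, from the standard closed form
# R_n^m(r) = sum_k (-1)^k (n-k)! / (k! ((n+m)/2-k)! ((n-m)/2-k)!) r^(n-2k),
# defined for n - |m| even and zero otherwise.
def zernike_radial(n, m, r):
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * r ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Defocus (n=2, m=0) reduces to R(r) = 2r^2 - 1;
# spherical aberration (n=4, m=0) to 6r^4 - 6r^2 + 1.
```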
Event-Triggered Fault Detection of Nonlinear Networked Systems.
Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping
2017-04-01
This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations. The latter system is solved by Gaussian elimination. The accuracy and validity of this method are discussed by solving two numerical examples and by comparisons with wavelet-based and other methods.
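The same pipeline, Legendre expansion of the unknown, Gauss-Legendre quadrature for the integral term, and collocation to get a linear system, can be sketched on a toy Fredholm equation rather than the paper's FVIDEs. The equation u(x) = x/3 + ∫ over [-1, 1] of x·t·u(t) dt has exact solution u(x) = x, an illustrative choice:

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 4                                                    # truncation degree
xc = np.cos(np.pi * (np.arange(N + 1) + 0.5) / (N + 1))  # collocation points
tq, wq = L.leggauss(8)                                   # quadrature nodes/weights

def basis(x, k):
    """Evaluate the k-th Legendre polynomial P_k at x."""
    return L.legval(x, np.eye(N + 1)[k])

# Collocate (I - K) u = f in the Legendre basis:
# A[i, k] = P_k(x_i) - x_i * integral of t * P_k(t) dt
A = np.empty((N + 1, N + 1))
for i, xi in enumerate(xc):
    for k in range(N + 1):
        A[i, k] = basis(xi, k) - xi * np.sum(wq * tq * basis(tq, k))
f = xc / 3.0
c = np.linalg.solve(A, f)        # Legendre coefficients of the solution
u = lambda x: L.legval(x, c)
```

Because the exact solution lies in the polynomial space and the quadrature is exact for the integrands involved, the recovered coefficients reproduce u(x) = x to machine precision.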
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
An SVM model with hybrid kernels for hydrological time series
NASA Astrophysics Data System (ADS)
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
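What makes such a hybrid kernel legitimate is that a convex combination of valid kernels is again a positive semidefinite kernel, so the SVM problem stays well posed. A minimal sketch; the weight and kernel parameters below are illustrative, not the paper's calibrated values:

```python
import numpy as np

# Convex combination of an RBF kernel and a polynomial kernel:
# K = w * exp(-gamma * ||x - y||^2) + (1 - w) * (x . y + coef0)^degree
def hybrid_kernel(X, Y, w=0.7, gamma=0.5, degree=2, coef0=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)
    poly = (X @ Y.T + coef0) ** degree
    return w * rbf + (1 - w) * poly

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))          # stand-in for predictor vectors
K = hybrid_kernel(X, X)
# A valid kernel yields a symmetric PSD Gram matrix
eigvals = np.linalg.eigvalsh((K + K.T) / 2)
```

In practice the Gram matrix K would be passed to an SVM solver as a precomputed kernel, and w tuned alongside the other hyperparameters.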
Elastic strain field due to an inclusion of a polyhedral shape with a non-uniform lattice misfit
NASA Astrophysics Data System (ADS)
Nenashev, A. V.; Dvurechenskii, A. V.
2017-03-01
An analytical solution in a closed form is obtained for the three-dimensional elastic strain distribution in an unlimited medium containing an inclusion with a coordinate-dependent lattice mismatch (an eigenstrain). Quantum dots consisting of a solid solution with a spatially varying composition are examples of such inclusions. It is assumed that both the inclusion and the surrounding medium (the matrix) are elastically isotropic and have the same Young's modulus and Poisson ratio. The inclusion shape is supposed to be an arbitrary polyhedron, and the coordinate dependence of the lattice misfit, with respect to the matrix, is assumed to be a polynomial of any degree. It is shown that, both inside and outside the inclusion, the strain tensor is expressed as a sum of contributions of all faces, edges, and vertices of the inclusion. Each of these contributions, as a function of the observation point's coordinates, is a product of some polynomial and a simple analytical function, which is the solid angle subtended by the face from the observation point (for a contribution of a face), or the potential of the uniformly charged edge (for a contribution of an edge), or the distance from the vertex to the observation point (for a contribution of a vertex). The method of constructing the relevant polynomial functions is suggested. We also found out that similar expressions describe an electrostatic or gravitational potential, as well as its first and second derivatives, of a polyhedral body with a charge/mass density that depends on coordinates polynomially.
NASA Astrophysics Data System (ADS)
Briones, J. C.; Heras, V.; Abril, C.; Sinchi, E.
2017-08-01
The proper control of built heritage entails many challenges related to the complexity of heritage elements and the extent of the area to be managed, for which the available resources must be used efficiently. In this scenario, the preventive conservation approach, based on the principle that prevention is better than cure, emerges as a strategy to avoid the progressive and imminent loss of monuments and heritage sites. Regular monitoring appears as a key tool to identify timely changes in heritage assets. This research demonstrates that a supervised learning model (Support Vector Machines, SVM) is an ideal tool to support the monitoring process by detecting visible elements in aerial images such as roof structures, vegetation, and pavements. The linear, Gaussian, and polynomial kernel functions were tested; the linear function provided better results than the other functions. It is important to mention that, due to the high level of segmentation generated by the classification procedure, it was necessary to apply a generalization process through an opening (a mathematical morphological operation), which simplified the over-classification of the monitored elements.
Optimization of Turbine Blade Design for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Shyy, Wei
1998-01-01
To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations of flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations were used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of two different network types, a radial-basis network and a back-propagation network, depends on the number of input data, the number of iterations required for the radial-basis network is less than that for the back-propagation network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée
2015-04-01
In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over the ordinary Kriging or the sparse polynomial chaos expansion depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
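The basis-selection step can be sketched in a few lines. The toy example below is an illustrative reconstruction, not the authors' implementation: it ranks a tensorized Hermite polynomial basis with least-angle regression and picks the truncation size by leave-one-out error, with ordinary least squares standing in for the universal Kriging model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars, LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n, max_deg = 40, 3
X = rng.standard_normal((n, 2))
# True model: a He_1(x0) term and a He_2(x1) term plus small noise.
y = 1.0 + 2.0 * X[:, 0] + 0.5 * (X[:, 1] ** 2 - 1) + 0.05 * rng.standard_normal(n)

def he(x, d):
    c = np.zeros(d + 1)
    c[d] = 1.0
    return hermeval(x, c)  # probabilists' Hermite polynomial He_d

# Tensorized Hermite basis of total degree <= max_deg (constant term
# left to the regression intercept).
idx = [(a, b) for a in range(max_deg + 1)
       for b in range(max_deg + 1 - a) if a + b > 0]
Phi = np.column_stack([he(X[:, 0], a) * he(X[:, 1], b) for a, b in idx])

# LARS orders the basis terms by relevance ...
lars = Lars(n_nonzero_coefs=len(idx)).fit(Phi, y)
order = list(lars.active_)

# ... and leave-one-out cross validation picks the truncation size.
loo_mse = [-cross_val_score(LinearRegression(), Phi[:, order[:k]], y,
                            cv=LeaveOneOut(),
                            scoring="neg_mean_squared_error").mean()
           for k in range(1, len(order) + 1)]
best_k = 1 + int(np.argmin(loo_mse))
selected = [idx[i] for i in order[:best_k]]
print(selected)
```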
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices connected to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that computing the resultant polynomial value is either storage-intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we propose an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been carried out using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for the simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for the simulated dataset using Legendre polynomials, Bayesian B-spline mapping can find the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
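To see why B-splines are an attractive alternative to orthogonal polynomials for growth trajectories, a least-squares cubic B-spline fit of a hypothetical logistic growth curve (illustrative data, not the paper's) can be done directly with SciPy:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)                    # measurement times
growth = 10 / (1 + np.exp(-10 * (t - 0.5)))  # logistic growth trajectory
y = growth + rng.normal(0, 0.2, t.size)

# Cubic B-spline with a few interior knots; the knot vector needs k + 1
# repeated boundary knots at each end.
k = 3
interior = np.linspace(0, 1, 7)[1:-1]
knots = np.concatenate(([0] * (k + 1), interior, [1] * (k + 1)))
spline = make_lsq_spline(t, y, knots, k=k)

resid = y - spline(t)
rmse = np.sqrt(np.mean(resid ** 2))
print(rmse)
```

The local support of each basis function is what lets the spline track sharp transitions (like the inflection above) without the global oscillations a high-order polynomial fit would introduce.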
Experimental Modal Analysis and Dynamic Component Synthesis. Volume 6. Software User’s Guide.
1987-12-01
This option generates a Complex Mode Indication Function (CMIF) from the measurement directory, including modifications from the measurement selection option; all reference measurements are included in the data set to be analyzed. The peaks in the CMIF chart indicate existing modes. Thus, the order of the polynomials is determined by the number of peaks found in the CMIF chart, and can therefore be determined before the estimation process.
Staley, James R; Burgess, Stephen
2017-05-01
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
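The stratification idea can be illustrated on simulated data. The sketch below omits the instrumental-variable machinery entirely and simply estimates a local slope per exposure stratum, a naive stand-in for the LACE estimates on which both methods operate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
exposure = rng.uniform(0, 10, n)
# True piecewise-linear relationship: slope 1 below 5, slope 3 above.
outcome = (np.where(exposure < 5, exposure, 5 + 3 * (exposure - 5))
           + rng.normal(0, 1, n))

# Stratify on exposure quantiles and estimate a local slope in each
# stratum (the piecewise linear method would then join these gradients
# into one continuous function).
edges = np.quantile(exposure, np.linspace(0, 1, 11))
slopes = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (exposure >= lo) & (exposure <= hi)
    slopes.append(np.polyfit(exposure[mask], outcome[mask], 1)[0])

print(np.round(slopes, 2))
```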
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Sahinidis, Nikolaos V.
2013-03-06
In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.
Langley, Jason; Zhao, Qun
2009-09-07
The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Gaussian noise was then added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev and Legendre implementations of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well with PRELUDE 3D, a 3D phase unwrapping software package well recognized for functional MRI.
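The coefficient-by-projection step that the orthogonality of the basis enables can be sketched in one dimension (a generic illustration, not the authors' implementation; Legendre polynomials are the Gegenbauer family with parameter 1/2):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Smooth 1D "phase" profile, a stand-in for one separable factor of the
# 3D phase model; degree 3, so four Legendre terms suffice.
def phase(x):
    return 6.0 * x + 2.0 * x ** 2 - 0.5 * x ** 3

# Orthogonality on [-1, 1]: <P_j, P_k> = 2/(2k+1) * delta_jk, so each
# expansion coefficient is an independent projection integral, evaluated
# here with Gauss-Legendre quadrature (exact for polynomial integrands
# of this degree).
nodes, weights = L.leggauss(20)

def legendre_coeff(f, k):
    return (2 * k + 1) / 2.0 * np.sum(weights * f(nodes)
                                      * L.Legendre.basis(k)(nodes))

coeffs = np.array([legendre_coeff(phase, k) for k in range(4)])
x = np.linspace(-1, 1, 201)
recon = L.legval(x, coeffs)
print(np.max(np.abs(recon - phase(x))))
```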
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno
2016-09-15
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and the sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the "curse of dimensionality", namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximation (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of the optimal rank, stopping criteria in the updating of the polynomial coefficients, and error estimation. In the sequel, we confront canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE when the number of available model evaluations is small with respect to the input dimension, a situation often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of the Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior of our model under charge conjugation and time invariance. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.
Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.
Friedrich, Tobias; Neumann, Frank
2015-01-01
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
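A minimal version of such an evolutionary algorithm on a monotone submodular function (set coverage under a uniform cardinality constraint, with infeasible offspring rejected) can be sketched as follows; the instance size and iteration count are illustrative assumptions:

```python
import random
from itertools import combinations

random.seed(4)

# Ground-set coverage: a classic monotone submodular function.
sets = [frozenset(random.sample(range(30), 8)) for _ in range(10)]
budget = 3
n = len(sets)

def coverage(bits):
    covered = set()
    for i, b in enumerate(bits):
        if b:
            covered |= sets[i]
    return len(covered)

# (1 + 1) EA: flip each bit independently with probability 1/n; keep the
# offspring if it is feasible and at least as good as the parent.
bits, value = [0] * n, 0
for _ in range(3000):
    child = [b ^ (random.random() < 1 / n) for b in bits]
    if sum(child) <= budget:
        v = coverage(child)
        if v >= value:
            bits, value = child, v

# Brute-force optimum for comparison (feasible here since n is tiny).
opt = max(coverage([1 if i in c else 0 for i in range(n)])
          for r in range(budget + 1) for c in combinations(range(n), r))
print(value, opt)
```

On an instance this small the EA typically reaches the optimum, comfortably inside the (1 - 1/e) guarantee that greedy-style arguments give for this constraint type.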
The Boundary Function Method. Fundamentals
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-03-01
The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, thermodynamic quantities need to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
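For ideal Bose gases, the exact canonical partition function admits a standard recursion that is equivalent to the Bell-polynomial form; the snippet below implements this textbook recursion and is not code from the paper:

```python
import math

def canonical_partition_bose(energies, beta, N):
    """Canonical partition function Z_N of N ideal bosons with the given
    single-particle energies, via the recursion
        Z_N = (1/N) * sum_{k=1..N} z_k * Z_{N-k},
        z_k = sum_i exp(-k * beta * e_i),
    which amounts to expressing Z_N through complete Bell polynomials
    in the cluster sums z_k."""
    z = [sum(math.exp(-k * beta * e) for e in energies) for k in range(1, N + 1)]
    Z = [1.0]  # Z_0 = 1
    for m in range(1, N + 1):
        Z.append(sum(z[k - 1] * Z[m - k] for k in range(1, m + 1)) / m)
    return Z[N]

# Sanity check: a single level at zero energy gives Z_N = 1 for any N.
print(canonical_partition_bose([0.0], beta=1.0, N=5))
```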
Compactly supported Wannier functions and algebraic K -theory
NASA Astrophysics Data System (ADS)
Read, N.
2017-03-01
In a tight-binding lattice model with n orbitals (single-particle states) per site, Wannier functions are n -component vector functions of position that fall off rapidly away from some location, and such that a set of them in some sense span all states in a given energy band or set of bands; compactly supported Wannier functions are such functions that vanish outside a bounded region. They arise not only in band theory, but also in connection with tensor-network states for noninteracting fermion systems, and for flat-band Hamiltonians with strictly short-range hopping matrix elements. In earlier work, it was proved that for general complex band structures (vector bundles) or general complex Hamiltonians—that is, class A in the tenfold classification of Hamiltonians and band structures—a set of compactly supported Wannier functions can span the vector bundle only if the bundle is topologically trivial, in any dimension d of space, even when use of an overcomplete set of such functions is permitted. This implied that, for a free-fermion tensor network state with a nontrivial bundle in class A, any strictly short-range parent Hamiltonian must be gapless. Here, this result is extended to all ten symmetry classes of band structures without additional crystallographic symmetries, with the result that in general the nontrivial bundles that can arise from compactly supported Wannier-type functions are those that may possess, in each of d directions, the nontrivial winding that can occur in the same symmetry class in one dimension, but nothing else. The results are obtained from a very natural usage of algebraic K -theory, based on a ring of polynomials in e±i kx,e±i ky,..., which occur as entries in the Fourier-transformed Wannier functions.
Functional Relationship between Sucrose and a Cariogenic Biofilm Formation
Cai, Jian-Na; Jung, Ji-Eun; Dang, Minh-Huy; Kim, Mi-Ah; Yi, Ho-Keun; Jeon, Jae-Gyu
2016-01-01
Sucrose is an important dietary factor in cariogenic biofilm formation and subsequent initiation of dental caries. This study investigated the functional relationships between sucrose concentration and Streptococcus mutans adherence and biofilm formation. Changes in morphological characteristics of the biofilms with increasing sucrose concentration were also evaluated. S. mutans biofilms were formed on saliva-coated hydroxyapatite discs in culture medium containing 0, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20, or 40% (w/v) sucrose. The adherence (in 4-hour biofilms) and biofilm composition (in 46-hour biofilms) of the biofilms were analyzed using microbiological, biochemical, laser scanning confocal fluorescence microscopic, and scanning electron microscopic methods. To determine the relationships, 2nd order polynomial curve fitting was performed. In this study, the influence of sucrose on bacterial adhesion, biofilm composition (dry weight, bacterial counts, and water-insoluble extracellular polysaccharide (EPS) content), and acidogenicity followed a 2nd order polynomial curve with concentration dependence, and the maximum effective concentrations (MECs) of sucrose ranged from 0.45 to 2.4%. The bacterial and EPS bio-volume and thickness in the biofilms also gradually increased and then decreased as sucrose concentration increased. Furthermore, the size and shape of the micro-colonies of the biofilms depended on the sucrose concentration. Around the MECs, the micro-colonies were bigger and more homogeneous than those at 0 and 40%, and were surrounded by enough EPSs to support their structure. These results suggest that the relationship between sucrose concentration and cariogenic biofilm formation in the oral cavity could be described by a functional relationship. PMID:27275603
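The 2nd order polynomial fitting used to locate a maximum effective concentration (MEC) can be sketched with NumPy on hypothetical dose-response values; the numbers below are made up for illustration, and only the procedure mirrors the paper:

```python
import numpy as np

# Hypothetical dose-response data: biofilm response versus log10 of
# sucrose concentration, peaking at an intermediate concentration.
conc = np.array([0.05, 0.1, 0.5, 1, 2, 5, 10, 20, 40])  # % (w/v)
log_c = np.log10(conc)
response = np.array([2.1, 2.9, 4.0, 4.3, 4.2, 3.6, 2.8, 2.0, 1.1])

# 2nd order polynomial fit; for a concave parabola (a < 0) the MEC is
# the vertex, -b / (2a), mapped back from log space.
a, b, c0 = np.polyfit(log_c, response, 2)
mec = 10 ** (-b / (2 * a))
print(round(mec, 2))
```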
Optimal bounds and extremal trajectories for time averages in dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
Bilenko, Natalia Y; Gallant, Jack L
2016-01-01
In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
Higher-order Fourier analysis over finite fields and applications
NASA Astrophysics Data System (ADS)
Hatami, Pooya
Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable.
We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they essentially are equivalent to either a Gowers norm or an L_p norm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gashkov, Sergey B; Sergeev, Igor' S
2012-10-31
This work suggests a method for deriving lower bounds for the complexity of polynomials with positive real coefficients implemented by circuits of functional elements over the monotone arithmetic basis {x+y, x·y} ∪ {a·x | a ∈ R₊}. Using this method, several new results are obtained. In particular, we construct examples of polynomials of degree m-1 in each of the n variables with coefficients 0 and 1 having additive monotone complexity m^((1-o(1))n) and multiplicative monotone complexity m^((1/2-o(1))n) as m^n → ∞. In this form, the lower bounds derived here are sharp. Bibliography: 72 titles.
The leading term of the Plancherel-Rotach asymptotic formula for solutions of recurrence relations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aptekarev, A I; Tulyakov, D N
Recurrence relations generating Padé and Hermite-Padé polynomials are considered. Their coefficients increase with the index of the relation, but after dividing by an appropriate power of the scaling function they tend to a finite limit. As a result, after scaling the polynomials 'stabilize' for large indices; this type of asymptotic behaviour is called Plancherel-Rotach asymptotics. An explicit expression for the leading term of the asymptotic formula, which is valid outside sets containing the zeros of the polynomials, is obtained for wide classes of three- and four-term relations. For three-term recurrence relations this result generalizes a theorem of Van Assche on recurrence relations with 'regularly' growing coefficients. Bibliography: 19 titles.
An Exactly Solvable Spin Chain Related to Hahn Polynomials
NASA Astrophysics Data System (ADS)
Stoilova, Neli I.; van der Jeugt, Joris
2011-03-01
We study a linear spin chain which was originally introduced by Shi et al. [Phys. Rev. A 71 (2005), 032309, 5 pages], for which the coupling strength contains a parameter α and depends on the parity of the chain site. Extending the model by a second parameter β, it is shown that the single fermion eigenstates of the Hamiltonian can be computed in explicit form. The components of these eigenvectors turn out to be Hahn polynomials with parameters (α,β) and (α+1,β-1). The construction of the eigenvectors relies on two new difference equations for Hahn polynomials. The explicit knowledge of the eigenstates leads to a closed form expression for the correlation function of the spin chain. We also discuss some aspects of a q-extension of this model.
Scaling Property of Period-n-Tupling Sequences in One-Dimensional Mappings
NASA Astrophysics Data System (ADS)
Zeng, Wan-Zhen; Hao, Bai-Lin; Wang, Guang-Rui; Chen, Shi-Gang
1984-05-01
We calculated the universal scaling function g(x) and the scaling factor α as well as the convergence rate δ for period-tripling, -quadrupling and -quintupling sequences of RL, RL^2, RLR^2, RL^2R and RL^3 types. The superstable periods are closely connected to a set of polynomials P_n defined recursively by the original mapping. Some notable properties of these polynomials are studied. Several approaches to solving the renormalization group equation and estimating the scaling factors are suggested.
Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.
2001-01-01
Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account on power orthogonality (s- and [sigma]-orthogonal polynomials) and generalized Gaussian quadratures with multiple nodes, including stable algorithms for numerical construction of the corresponding polynomials and Cotes numbers, are given. In particular, the important case of Chebyshev weight is analyzed. Finally, some applications in moment-preserving approximation of functions by defective splines are discussed.
Monograph on the use of the multivariate Gram Charlier series Type A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatayodom, T.; Heydt, G.
1978-01-01
The Gram-Charlier series is an infinite series expansion for a probability density function (pdf) in which the terms of the series are Hermite polynomials. There are several Gram-Charlier series; the best known is Type A. The Gram-Charlier series, Type A (GCA), exists for both univariate and multivariate random variables. This monograph introduces the multivariate GCA and illustrates its use through several examples. A brief bibliography and discussion of Hermite polynomials is also included. 9 figures, 2 tables.
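The univariate Type A series is small enough to sketch directly: the density is the normal pdf multiplied by a Hermite-polynomial correction driven by skewness and excess kurtosis. The truncation after the fourth Hermite term and the sample moments below are illustrative choices, not the monograph's.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def gram_charlier_a(x, mean, var, skew, exkurt):
    """Univariate Gram-Charlier Type A density, truncated after He4 (sketch)."""
    z = (x - mean) / np.sqrt(var)
    phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi * var)
    # series 1 + (skew/6) He3(z) + (exkurt/24) He4(z)
    series = (1
              + (skew / 6) * hermeval(z, [0, 0, 0, 1])
              + (exkurt / 24) * hermeval(z, [0, 0, 0, 0, 1]))
    return phi * series

x = np.linspace(-6, 6, 2001)
f = gram_charlier_a(x, 0.0, 1.0, 0.2, 0.3)
# He3 and He4 integrate to zero against the Gaussian, so total mass stays 1
integral = np.sum((f[1:] + f[:-1]) / 2) * (x[1] - x[0])
g0 = gram_charlier_a(np.array([0.0]), 0.0, 1.0, 0.0, 0.0)[0]
```

With zero skew and excess kurtosis the series reduces to the plain normal pdf.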
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
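The Forsythe construction referred to above generates polynomials orthogonal over the sample points by a three-term recurrence, which is what removes the ill-conditioning of the normal equations; a minimal sketch of that recurrence, not the authors' implementation:

```python
import numpy as np

def forsythe_basis(x, deg):
    """Discrete orthogonal polynomials on sample points x (Forsythe recurrence).

    p_{k+1}(x) = (x - a_k) p_k(x) - b_k p_{k-1}(x), with a_k, b_k chosen from
    discrete inner products so the evaluated basis vectors are orthogonal.
    """
    n = len(x)
    P = np.zeros((deg + 1, n))
    P[0] = 1.0
    a = np.dot(x * P[0], P[0]) / np.dot(P[0], P[0])
    P[1] = (x - a) * P[0]
    for k in range(1, deg):
        a = np.dot(x * P[k], P[k]) / np.dot(P[k], P[k])
        b = np.dot(P[k], P[k]) / np.dot(P[k - 1], P[k - 1])
        P[k + 1] = (x - a) * P[k] - b * P[k - 1]
    return P

x = np.linspace(-1.0, 1.0, 50)
P = forsythe_basis(x, 6)
G = P @ P.T                                   # Gram matrix over the points
D = np.sqrt(np.outer(np.diag(G), np.diag(G)))
max_cross = np.max(np.abs(G / D - np.eye(len(G))))   # normalized off-diagonals
```

The normal matrix in this basis is (numerically) diagonal, which is exactly the decoupling the abstract describes.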
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function technique by using interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches namely smoothing spline and fourth order polynomial in extracting the RND. The implied volatility of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of distribution. The difference between the first moment of distribution and the price of underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that the estimation of RND using a fourth order polynomial is more appropriate to be used compared to a smoothing spline in which the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of the future developments of the underlying asset.
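The Breeden-Litzenberger step behind this kind of RND extraction can be sketched as follows, with a synthetic smile and a fourth-order polynomial fit standing in for the paper's DJIA option data: fit the implied-volatility curve, price calls with Black-Scholes, and take the second derivative with respect to strike.

```python
import numpy as np
from math import erf

norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / 2**0.5)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price, vectorized over strikes and vols."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm_cdf(d1) - K * np.exp(-r * T) * norm_cdf(d2)

# synthetic implied-volatility smile observed at sparse strikes
S0, T, r = 100.0, 1.0 / 12.0, 0.0
K_obs = np.linspace(70, 130, 13)
iv_obs = 0.20 + 2e-5 * (K_obs - S0) ** 2

# fourth-order polynomial fit of the smile (centered strikes for conditioning)
coef = np.polyfit(K_obs - S0, iv_obs, 4)

# dense strike grid: price calls, then take d2C/dK2 (= RND when r = 0)
K = np.linspace(60, 140, 801)
C = bs_call(S0, K, T, r, np.polyval(coef, K - S0))
rnd = np.gradient(np.gradient(C, K), K)
mass = np.sum((rnd[1:] + rnd[:-1]) / 2 * np.diff(K))
```

The recovered density integrates to roughly one and peaks near the forward price, which is the basic sanity check for any RND estimate.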
Direct discriminant locality preserving projection with Hammerstein polynomial expansion.
Chen, Xi; Zhang, Jiashu; Li, Defang
2012-12-01
Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) no explicit mapping functions, which results in further computational burden when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that maximize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations {S_(N,M)} to a periodic function f which uses the ideas of Padé, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_(N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_(N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge point-wise to (f(x^+) + f(x^-))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
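For context, the slow convergence of the underlying Fourier partial sums near a discontinuity is easy to verify numerically; this baseline (not the Fourier-Padé acceleration itself) is what the approximations improve on. The square wave below is an illustrative test function, not one of the paper's examples.

```python
import numpy as np

def square_wave_partial_sum(x, N):
    """First N terms of the Fourier series of sign(sin x):
    (4/pi) * sum_{k<N} sin((2k+1)x) / (2k+1)."""
    k = np.arange(N)
    return (4 / np.pi) * np.sum(np.sin((2 * k + 1) * x) / (2 * k + 1))

x0 = np.pi / 2                 # a smooth point, where the wave equals 1
err5 = abs(square_wave_partial_sum(x0, 5) - 1.0)
err50 = abs(square_wave_partial_sum(x0, 50) - 1.0)
```

Even at a smooth point the error only decays like O(1/N), which is the slow convergence the rational Fourier-Padé construction is designed to accelerate.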
Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo
NASA Astrophysics Data System (ADS)
Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao
2017-10-01
Freeform surfaces are now widely used in optical systems because they offer additional degrees of freedom that improve imaging performance. Producing them requires integrating freeform optical design, precision freeform manufacture, freeform metrology, and a compensation method that corrects the form deviation introduced during production, which together provide more flexibility and better performance. This paper focuses on the fabrication and correction of freeform surfaces. In this study, multi-axis ultra-precision machining is used to improve the quality of the freeform optics, on a machine equipped with a positioning C-axis and a CXZ machining capability, also called the slow tool servo (STS) function. A compensation method based on Zernike polynomials is successfully verified: it corrects the form deviation of the freeform surface. Finally, the freeform surfaces are measured with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated with Zernike polynomial fitting to improve the form accuracy of the freeform surface.
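The Zernike fitting used for compensation starts from the radial polynomials R_n^m; a minimal evaluation routine from the standard closed-form sum, independent of the authors' toolchain:

```python
from math import factorial
import numpy as np

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho); requires n >= m >= 0, n - m even."""
    m = abs(m)
    rho = np.asarray(rho, dtype=float)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    return R

# spot checks: R_2^0 = 2 rho^2 - 1, R_2^2 = rho^2, and R_n^m(1) = 1
r20 = zernike_radial(2, 0, np.array([0.5]))[0]
r22 = zernike_radial(2, 2, np.array([0.6]))[0]
r40 = zernike_radial(4, 0, np.array([1.0]))[0]
```

A full surface fit then combines these radial terms with azimuthal factors cos(mθ)/sin(mθ) in a least-squares system over the measured points.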
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution, whose properties are studied using the density function concept. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
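A minimal illustration of treating aggregated data as a density function, assuming a simple piecewise-constant (histogram) aggregate rather than the paper's spline models:

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=0.5, size=10_000)

# aggregate the raw sample into relative-frequency bins
counts, edges = np.histogram(sample, bins=40)
widths = np.diff(edges)
density = counts / (counts.sum() * widths)    # piecewise-constant density

def density_at(x):
    """Evaluate the aggregated piecewise-constant density at points x."""
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                  0, len(density) - 1)
    out = density[idx]
    return np.where((x < edges[0]) | (x > edges[-1]), 0.0, out)

total = np.sum(density * widths)              # should be exactly 1
p_peak = density_at(np.array([2.0]))[0]
p_tail = density_at(np.array([3.8]))[0]
```

A spline fit through the bin values would give the smooth piecewise-polynomial representation the paper advocates; the histogram step shows the aggregation itself.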
NASA Astrophysics Data System (ADS)
Liu, Huawei; Zheng, Shu; Zhou, Huaichun; Qi, Chaobo
2016-02-01
A generalized method to estimate a two-dimensional (2D) distribution of temperature and wavelength-dependent emissivity in a sooty flame from spectroscopic radiation intensities is proposed in this paper. The method adopts a Newton-type iterative method to solve for the unknown coefficients in the polynomial relationship between the emissivity and the wavelength, as well as the unknown temperature. Polynomial functions of increasing order are examined, and the final results are determined once the result converges. Numerical simulation on a fictitious flame with wavelength-dependent absorption coefficients shows good performance, with relative errors less than 0.5% in the average temperature. Moreover, a hyper-spectral imaging device is introduced to measure an ethylene/air laminar diffusion flame with the proposed method. The proper order for the polynomial function is selected to be 2, because each one-order increase in the polynomial function brings in a temperature variation smaller than 20 K. For the ethylene laminar diffusion flame with 194 ml min⁻¹ C2H4 and 284 L min⁻¹ air studied in this paper, the 2D distribution of average temperature estimated along the line of sight is similar to, but smoother than, that of the local temperature given in references, and the 2D distribution of emissivity shows a cumulative effect of the absorption coefficient along the line of sight. It also shows that the emissivity of the flame decreases as the wavelength increases. The emissivity at wavelength 400 nm is about 2.5 times that at wavelength 1000 nm for a typical line of sight in the flame, with the same trend for the absorption coefficient of soot varied with the wavelength.
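The Newton-type iteration at the heart of the retrieval can be sketched generically: solve F(x) = 0 for the stacked unknowns (in the paper, the temperature and the polynomial emissivity coefficients) using a finite-difference Jacobian. The two-equation system below is a toy stand-in, not the radiative model.

```python
import numpy as np

def newton(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method for a square nonlinear system, forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):          # build the Jacobian column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# toy system: x + y = 3, x * y = 2  ->  roots (1, 2) and (2, 1)
def toy(v):
    return np.array([v[0] + v[1] - 3.0, v[0] * v[1] - 2.0])

root = newton(toy, [0.5, 2.5])
residual = np.linalg.norm(toy(root))
```

In the paper's setting F would stack the measured-minus-modeled spectral intensities, with the polynomial order increased until the retrieved temperature stops changing.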
Carsin-Vu, Aline; Corouge, Isabelle; Commowick, Olivier; Bouzillé, Guillaume; Barillot, Christian; Ferré, Jean-Christophe; Proisy, Maia
2018-04-01
To investigate changes in cerebral blood flow (CBF) in gray matter (GM) between 6 months and 15 years of age and to provide CBF values for the brain, GM, white matter (WM), hemispheres and lobes. Between 2013 and 2016, we retrospectively included all clinical MRI examinations with arterial spin labeling (ASL). We excluded subjects with a condition potentially affecting brain perfusion. For each subject, mean values of CBF in the brain, GM, WM, hemispheres and lobes were calculated. GM CBF was fitted using linear, quadratic and cubic polynomial regression against age. Regression models were compared with Akaike's information criterion (AIC), and Likelihood Ratio tests. 84 children were included (44 females/40 males). Mean CBF values were 64.2 ± 13.8 mL/100 g/min in GM, and 29.3 ± 10.0 mL/100 g/min in WM. The best-fit model of brain perfusion was the cubic polynomial function (AIC = 672.7, versus respectively AIC = 673.9 and AIC = 674.1 with the linear negative function and the quadratic polynomial function). A statistically significant difference between the tested models demonstrating the superiority of the quadratic (p = 0.18) or cubic polynomial model (p = 0.06) over the negative linear regression model was not found. No effect of general anesthesia (p = 0.34) or of gender (p = 0.16) was found. We provided values for ASL CBF in the brain, GM, WM, hemispheres, and lobes over a wide pediatric age range, approximately showing inverted U-shaped changes in GM perfusion over the course of childhood. Copyright © 2018 Elsevier B.V. All rights reserved.
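The model comparison described — polynomial fits of increasing order ranked by AIC — can be sketched on synthetic data; the numbers below are illustrative, not the study's.

```python
import numpy as np

def poly_aic(x, y, deg):
    """Fit a degree-`deg` polynomial by least squares and return its AIC."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    n, k = len(y), deg + 2            # polynomial coefficients + noise variance
    return n * np.log(resid @ resid / n) + 2 * k

# synthetic inverted-U perfusion curve over a pediatric age range
rng = np.random.default_rng(0)
age = np.linspace(0.5, 15, 84)
cbf = 55 + 8 * age - 0.6 * age**2 + rng.normal(0, 2, size=age.size)

aics = {deg: poly_aic(age, cbf, deg) for deg in (1, 2, 3)}
best = min(aics, key=aics.get)
```

With a genuinely curved relationship, the linear model is heavily penalized, while the quadratic and cubic models differ by little — the same near-tie the study reports.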
Spectral/hp element methods: Recent developments, applications, and perspectives
NASA Astrophysics Data System (ADS)
Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.
2018-02-01
The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C^0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
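The exponential convergence claim is easy to check for a modal Legendre expansion of a smooth function; a sketch assuming a plain L2 projection on a single element rather than a full spectral/hp discretization:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_projection(f, p, nq=64):
    """L2-project f onto Legendre polynomials up to degree p on [-1, 1]."""
    x, w = leggauss(nq)                 # Gauss-Legendre quadrature nodes/weights
    fx = f(x)
    coefs = [(2 * k + 1) / 2 * np.sum(w * fx * Legendre.basis(k)(x))
             for k in range(p + 1)]
    return Legendre(coefs)

# max error of the truncated expansion of exp(x) for increasing order p
xx = np.linspace(-1, 1, 201)
errs = {p: np.max(np.abs(np.exp(xx) - legendre_projection(np.exp, p)(xx)))
        for p in (2, 4, 8)}
```

Each doubling of the polynomial order drops the error by orders of magnitude, the hallmark of p-type (spectral) convergence for analytic solutions.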
Wavefront aberrations of x-ray dynamical diffraction beams.
Liao, Keliang; Hong, Youli; Sheng, Weifan
2014-10-01
The effects of dynamical diffraction in x-ray diffractive optics with large numerical aperture render the wavefront aberrations difficult to describe using the aberration polynomials, yet knowledge of them plays an important role in a vast variety of scientific problems ranging from optical testing to adaptive optics. Although the diffraction theory of optical aberrations was established decades ago, its application in the area of x-ray dynamical diffraction theory (DDT) is still lacking. Here, we conduct a theoretical study on the aberration properties of x-ray dynamical diffraction beams. By treating the modulus of the complex envelope as the amplitude weight function in the orthogonalization procedure, we generalize the nonrecursive matrix method for the determination of orthonormal aberration polynomials, wherein Zernike DDT and Legendre DDT polynomials are proposed. As an example, we investigate the aberration evolution inside a tilted multilayer Laue lens. The corresponding Legendre DDT polynomials are obtained numerically, which represent balanced aberrations yielding minimum variance of the classical aberrations of an anamorphic optical system. The balancing of classical aberrations and their standard deviations are discussed. We also present the Strehl ratio of the primary and secondary balanced aberrations.
A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation
NASA Astrophysics Data System (ADS)
Oruç, Ömer
2018-04-01
In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for numerical solutions of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into solving an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method is monitored. The accurate results acquired confirm the applicability of the method.
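The Lucas and Fibonacci polynomial families underlying the method are generated by the same three-term recurrence with different seeds; a small sketch (evaluation at x = 1 recovers the Lucas and Fibonacci numbers, a convenient correctness check):

```python
from numpy.polynomial import Polynomial as P

def lucas_polynomials(n):
    """L_0 = 2, L_1 = x, L_k = x * L_{k-1} + L_{k-2}."""
    polys = [P([2.0]), P([0.0, 1.0])]
    for _ in range(2, n + 1):
        polys.append(P([0, 1]) * polys[-1] + polys[-2])
    return polys[: n + 1]

def fibonacci_polynomials(n):
    """F_0 = 0, F_1 = 1, F_k = x * F_{k-1} + F_{k-2}."""
    polys = [P([0.0]), P([1.0])]
    for _ in range(2, n + 1):
        polys.append(P([0, 1]) * polys[-1] + polys[-2])
    return polys[: n + 1]

lucas_vals = [round(float(p(1.0))) for p in lucas_polynomials(5)]
fib_vals = [round(float(p(1.0))) for p in fibonacci_polynomials(5)]
```

In the paper's scheme the unknown solution is expanded in the Lucas basis, and differentiating that basis brings in the Fibonacci polynomials.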
Tensor calculus in polar coordinates using Jacobi polynomials
NASA Astrophysics Data System (ADS)
Vasil, Geoffrey M.; Burns, Keaton J.; Lecoanet, Daniel; Olver, Sheehan; Brown, Benjamin P.; Oishi, Jeffrey S.
2016-11-01
Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r = 0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.
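The Jacobi polynomials underlying these bases can be evaluated by their standard three-term recurrence; a small sketch independent of the paper's disk bases (for α = β = 0 the recurrence reduces to Legendre, which gives a convenient cross-check):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x) by three-term recurrence."""
    x = np.asarray(x, dtype=float)
    p0 = np.ones_like(x)
    if n == 0:
        return p0
    p1 = (a + 1) + (a + b + 2) * (x - 1) / 2
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (a * a - b * b)
        a3 = (c - 1) * c * (c - 2)
        a4 = 2 * (k + a - 1) * (k + b - 1) * c
        p0, p1 = p1, ((a2 + a3 * x) * p1 - a4 * p0) / a1
    return p1

# checks: reduces to Legendre for a = b = 0, and P_n^{(a,b)}(1) = C(n+a, n)
leg_gap = abs(jacobi(3, 0.0, 0.0, np.array([0.7]))[0] - Legendre.basis(3)(0.7))
at_one = jacobi(2, 1.0, 1.0, np.array([1.0]))[0]   # C(3, 2) = 3
```

The disk bases in the paper are built from such Jacobi polynomials in r^2, with the (α, β) indices chosen to enforce regularity at r = 0.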
Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing
2014-10-01
Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The presented study is to compare these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness even in the case of a wavefront with incomplete data.
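A least-squares wavefront fit in the 2D Chebyshev product basis T_i(x)T_j(y) described above can be sketched directly with NumPy; the synthetic wavefront below is illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# sample a synthetic low-order wavefront on the unit square
x = np.linspace(-1, 1, 41)
X, Y = np.meshgrid(x, x)
W = 0.3 * X**2 * Y - 0.1 * Y**3 + 0.05 * X

# design matrix of 2D Chebyshev products T_i(x) T_j(y), degree <= 3 per axis
V = C.chebvander2d(X.ravel(), Y.ravel(), [3, 3])
coef, *_ = np.linalg.lstsq(V, W.ravel(), rcond=None)
W_fit = (V @ coef).reshape(W.shape)
resid = np.max(np.abs(W_fit - W))
```

Because the test wavefront lies in the span of the basis, the residual is at machine-precision level; with real, noisy, or incomplete data the choice among the four bases affects conditioning and robustness, which is the paper's comparison.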
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects, indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
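The local polynomial strain estimation can be sketched in 2D with a first-order (affine) model, assuming noise-free synthetic displacements rather than DENSE data: a least-squares fit recovers the displacement gradient, whose symmetric part is the infinitesimal strain tensor.

```python
import numpy as np

def strain_from_displacements(pts, u, v):
    """Fit linear polynomials to a 2D displacement field sampled at pts and
    return the infinitesimal strain E = (grad(u) + grad(u)^T) / 2."""
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])  # [1, x, y]
    cu, *_ = np.linalg.lstsq(A, u, rcond=None)   # u ~ cu0 + cu1*x + cu2*y
    cv, *_ = np.linalg.lstsq(A, v, rcond=None)
    grad = np.array([[cu[1], cu[2]], [cv[1], cv[2]]])
    return 0.5 * (grad + grad.T)

# synthetic affine deformation with a known displacement gradient G
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 2))
G = np.array([[0.02, 0.01], [0.00, -0.015]])
disp = pts @ G.T
E = strain_from_displacements(pts, disp[:, 0], disp[:, 1])
```

The 3D version in the paper is the same idea with higher-order polynomials fitted locally, which also provides the noise suppression reported in the phantom evaluation.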
Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network
Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.
2015-01-01
Wireless Sensor Networks monitor and control the physical world via a large number of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSNs) perform sensed data communication through continuous data collection, resulting in higher delay and energy consumption. To address the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of Bayes' rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing the energy consumption. Next, the Polynomial Regression Function is applied to combine the target objects of similar events considered for different sensors. They are based on the minimum and maximum values of object events and are transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. The energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701
Finding the Best Quadratic Approximation of a Function
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2011-01-01
This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
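The comparison the article sets up can be reproduced numerically for f(x) = e^x on [0, 1]: the Taylor quadratic at 0 is accurate near the expansion point, while a least-squares quadratic spreads the error over the whole interval.

```python
import numpy as np

x = np.linspace(0, 1, 1001)
f = np.exp(x)

# Taylor quadratic at x = 0: 1 + x + x^2/2
taylor = 1 + x + x**2 / 2
# least-squares quadratic over the whole interval
ls = np.polyval(np.polyfit(x, f, 2), x)

err_taylor = np.max(np.abs(f - taylor))   # worst at x = 1: e - 2.5 ~ 0.218
err_ls = np.max(np.abs(f - ls))
```

The global fit's maximum error is roughly an order of magnitude smaller, which is the article's central observation about "best" quadratic approximations.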
Von Bertalanffy's dynamics under a polynomial correction: Allee effect and big bang bifurcation
NASA Astrophysics Data System (ADS)
Leonel Rocha, J.; Taha, A. K.; Fournier-Prunaret, D.
2016-02-01
In this work we consider new one-dimensional populational discrete dynamical systems in which the growth of the population is described by a family of von Bertalanffy's functions, as a dynamical approach to von Bertalanffy's growth equation. The purpose of introducing the Allee effect in those models is satisfied under a correction factor of polynomial type. We study classes of von Bertalanffy's functions with different types of Allee effect: strong and weak Allee functions. Depending on the variation of four parameters, von Bertalanffy's functions also include another class of important functions: functions with no Allee effect. The complex bifurcation structures of these von Bertalanffy's functions are investigated in detail. We verified that this family of functions has particular bifurcation structures: the big bang bifurcation of the so-called "box-within-a-box" type. The big bang bifurcation is associated to the asymptotic weight or carrying capacity. This work is a contribution to the study of big bang bifurcation analysis for continuous maps and their relationship with explosion birth and extinction phenomena.
BPS counting for knots and combinatorics on words
NASA Astrophysics Data System (ADS)
Kucharski, Piotr; Sułkowski, Piotr
2016-11-01
We discuss relations between quantum BPS invariants defined in terms of a product decomposition of certain series, and difference equations (quantum A-polynomials) that annihilate such series. We construct combinatorial models whose structure is encoded in the form of such difference equations, and whose generating functions (Hilbert-Poincaré series) are solutions to those equations and reproduce generating series that encode BPS invariants. Furthermore, BPS invariants in question are expressed in terms of Lyndon words in an appropriate language, thereby relating counting of BPS states to the branch of mathematics referred to as combinatorics on words. We illustrate these results in the framework of colored extremal knot polynomials: among others we determine dual quantum extremal A-polynomials for various knots, present associated combinatorial models, find corresponding BPS invariants (extremal Labastida-Mariño-Ooguri-Vafa invariants) and discuss their integrality.
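Counting Lyndon words, which the abstract connects to BPS-state counting, is standard combinatorics on words: by Witt's formula, the number of Lyndon words of length n over a k-letter alphabet is (1/n) Σ_{d|n} μ(d) k^{n/d}, where μ is the Möbius function. A short sketch with a brute-force cross-check:

```python
# Witt's formula: the number of Lyndon words of length n over a k-letter
# alphabet is (1/n) * sum over divisors d of n of mu(d) * k^(n/d).
def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor => mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def lyndon_count(n, k):
    return sum(mobius(d) * k ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

# brute-force check: a Lyndon word is strictly smaller than every one of
# its proper rotations in lexicographic order
def is_lyndon(w):
    return all(w < w[i:] + w[:i] for i in range(1, len(w)))

from itertools import product
brute = sum(is_lyndon("".join(w)) for w in product("ab", repeat=6))
```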
Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2017-03-14
The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions, while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br.
Poly-Frobenius-Euler polynomials
NASA Astrophysics Data System (ADS)
Kurt, Burak
2017-07-01
Hamahata [3] defined poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.
NASA Astrophysics Data System (ADS)
Yu, Yong; Wang, Jun
Wheat, pretreated by 60Co gamma irradiation, was dried with hot air at irradiation dosages of 0-3 kGy, drying temperatures of 40-60 °C, and initial moisture contents of 19-25% (dry basis). The drying characteristics and dried qualities of the wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, was employed together with the corresponding analysis method to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second-order polynomials consisting of linear, quadratic and interaction terms. High correlation coefficients indicated the suitability of the second-order polynomials for predicting these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.
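Fitting such a second-order response surface is a linear least-squares problem in the polynomial coefficients. A minimal sketch with two variables and synthetic data (the coefficient values and design points are invented, not from the wheat experiment):

```python
# terms of a full second-order polynomial in two variables:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
def features(x1, x2):
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_quadratic_surface(xs, ys):
    """Least squares via the normal equations F^T F c = F^T y."""
    F = [features(*x) for x in xs]
    p = len(F[0])
    A = [[sum(f[i] * f[j] for f in F) for j in range(p)] for i in range(p)]
    b = [sum(f[i] * y for f, y in zip(F, ys)) for i in range(p)]
    return solve(A, b)

true = [2.0, -1.0, 0.5, 0.3, -0.2, 0.8]  # known coefficients to recover
pts = [(i * 0.3, j * 0.4) for i in range(5) for j in range(5)]
ys = [sum(c * f for c, f in zip(true, features(*p))) for p in pts]
coef = fit_quadratic_surface(pts, ys)
```

On noiseless data from a 5 × 5 design the six coefficients are recovered exactly, which is the mechanism behind the high correlation coefficients the abstract reports.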
On the modular structure of the genus-one Type II superstring low energy expansion
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Green, Michael B.; Vanhove, Pierre
2015-08-01
The analytic contribution to the low energy expansion of Type II string amplitudes at genus-one is a power series in space-time derivatives with coefficients that are determined by integrals of modular functions over the complex structure modulus of the world-sheet torus. These modular functions are associated with world-sheet vacuum Feynman diagrams and given by multiple sums over the discrete momenta on the torus. In this paper we exhibit exact differential and algebraic relations for a certain infinite class of such modular functions by showing that they satisfy Laplace eigenvalue equations with inhomogeneous terms that are polynomial in non-holomorphic Eisenstein series. Furthermore, we argue that the set of modular functions that contribute to the coefficients of interactions up to order are linear sums of functions in this class and quadratic polynomials in Eisenstein series and odd Riemann zeta values. Integration over the complex structure results in coefficients of the low energy expansion that are rational numbers multiplying monomials in odd Riemann zeta values.
A new basis set for molecular bending degrees of freedom.
Jutier, Laurent
2010-07-21
We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle θ in the range [0, π]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology extends to complicated potential energy surfaces, such as quasilinear or multi-equilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.
Multi-Vehicle Function Tracking by Moment Matching
NASA Astrophysics Data System (ADS)
Avant, Trevor
The evolution of many natural and man-made environmental events can be represented as scalar functions of time and space. Examples include the boundary and intensity of wildfires, of waste spills in bodies of water, and of natural emissions of methane from the earth. The difficult task of understanding and monitoring these processes can be accomplished through the use of coordinated groups of vehicles. This thesis devises a method to determine positions of the members of a group of vehicles in the domain of a scalar function that lead to effective sensing of the function. The method involves equating the moments of a scalar function to the moments of a group of positions, which results in a system of polynomial equations to be solved. The methodology also allows other explicit geometric constraints, in the form of polynomial equations, to be imposed on the vehicles. Several example simulations are shown to demonstrate the advantages and challenges associated with the moment matching technique.
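The moment-matching idea can be sketched in the smallest nontrivial case, two vehicles on a line, which is an illustrative reduction of the thesis's general setting: matching the first two moments, (x1 + x2)/2 = m1 and (x1² + x2²)/2 = m2, gives a quadratic whose roots are the vehicle positions.

```python
import math

# Match the first two moments of a scalar density to a 2-vehicle
# configuration on a line: the sum and sum of squares of the positions
# determine a quadratic t^2 - s*t + prod = 0 whose roots are x1, x2.
def two_point_positions(m1, m2):
    s = 2.0 * m1                      # x1 + x2
    prod = (s * s - 2.0 * m2) / 2.0   # x1*x2 = ((x1+x2)^2 - (x1^2+x2^2)) / 2
    disc = s * s - 4.0 * prod
    if disc < 0:
        raise ValueError("moments not realizable by two real positions")
    r = math.sqrt(disc)
    return (s - r) / 2.0, (s + r) / 2.0

# moments of the uniform density on [0, 2]: m1 = 1, m2 = 4/3
x1, x2 = two_point_positions(1.0, 4.0 / 3.0)
```

For the uniform density this places the vehicles at 1 ± 1/√3, and extra geometric constraints would simply add more polynomial equations to this system.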
NASA Astrophysics Data System (ADS)
Xie, Xiang; Zheng, Hui; Qu, Yegao
2016-07-01
A weak-form variational method is developed to study the vibro-acoustic responses of a coupled structural-acoustic system consisting of an irregular acoustic cavity with general wall impedance and a flexible panel subjected to arbitrary edge-supporting conditions. The structural and acoustical models of the coupled system are formulated on the basis of a modified variational method combined with a multi-segment partitioning strategy. Meanwhile, the continuity constraints on the sub-segment interfaces are incorporated into the system stiffness matrix by means of a least-squares weighted residual method. Orthogonal polynomials, such as Chebyshev polynomials of the first kind, are employed as the admissible functions for the unknown displacement and sound pressure field variables of the separate components without meshing; hence, mapping the irregular physical domain into a square spectral domain is necessary. The effects of the weighting parameter, together with the number of truncated polynomial terms and divided partitions, on the accuracy of the present theoretical solutions are investigated. It is observed that, applying this methodology, accurate and efficient predictions can be obtained for various types of coupled panel-cavity problems; in both weak and strong coupling cases, for a panel surrounded by a light or heavy fluid, the principle of velocity continuity on the panel-cavity contacting interface is handled satisfactorily. Key parametric studies concerning the influences of the geometrical properties as well as the impedance boundary are performed. Finally, by performing vibro-acoustic analyses of a 3D car-like coupled miniature model, we demonstrate that the present method is an excellent way to obtain accurate mid-frequency solutions with an acceptable CPU time.
Determination of the paraxial focal length using Zernike polynomials over different apertures
NASA Astrophysics Data System (ADS)
Binkele, Tobias; Hilbig, David; Henning, Thomas; Fleischmann, Friedrich
2017-02-01
The paraxial focal length is still the most important parameter in the design of a lens. As presented at SPIE Optics + Photonics 2016, the measured focal length is a function of the aperture. The paraxial focal length is found as the aperture approaches zero. In this work, we investigate the dependency of the Zernike polynomials on the aperture size with respect to 3D space. In this way, conventional wavefront measurement systems that apply Zernike polynomial fitting (e.g. a Shack-Hartmann sensor) can also be used to determine the paraxial focal length. Since the Zernike polynomials are orthogonal over a unit circle, the aperture used in the measurement has to be normalized. By shrinking the aperture while maintaining the normalization, the Zernike coefficients change. The relation between these changes and the paraxial focal length is investigated. The dependency of the focal length on the aperture size is derived analytically and evaluated by simulation and measurement of a strongly focusing lens. The measurements are performed using experimental ray tracing and a Shack-Hartmann sensor. With experimental ray tracing, the aperture can be chosen easily. For the measurements with the Shack-Hartmann sensor, the aperture size is fixed; thus, the Zernike polynomials have to be adapted to different aperture sizes by the proposed method. In this way, the paraxial focal length can be determined from the measurements in both cases.
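The coefficient-aperture relation can be sketched for the defocus term. Assuming the Noll-normalized defocus polynomial Z4(ρ) = √3(2ρ² − 1) (other normalization conventions rescale the coefficient), a quadratic wavefront W(r) = r²/(2f) over an aperture of radius a has defocus coefficient c4 = a²/(4√3 f), so c4 scales as a² and f can be recovered from c4 at any aperture. The fitting routine below is a toy stand-in for a real Shack-Hartmann reconstruction.

```python
import math

# Noll-normalized Zernike defocus term: Z4(rho) = sqrt(3)*(2*rho^2 - 1).
def defocus_coefficient(f, a, n=50):
    """Fit W(rho) = a^2*rho^2/(2f) with piston + Z4 by least squares."""
    rhos = [(i + 0.5) / n for i in range(n)]          # normalized radii
    w = [a * a * r * r / (2.0 * f) for r in rhos]     # quadratic wavefront
    z = [math.sqrt(3.0) * (2.0 * r * r - 1.0) for r in rhos]
    # 2x2 normal equations for [piston, c4]
    s1, sz, szz = float(n), sum(z), sum(v * v for v in z)
    sw, szw = sum(w), sum(zi * wi for zi, wi in zip(z, w))
    det = s1 * szz - sz * sz
    return (s1 * szw - sz * sw) / det                 # c4

a, f_true = 5.0, 100.0
c4 = defocus_coefficient(f_true, a)
f_est = a * a / (4.0 * math.sqrt(3.0) * c4)  # invert c4 = a^2/(4*sqrt(3)*f)
```

Halving the aperture divides c4 by four while the recovered focal length stays the same, which is the adaptation the proposed method exploits.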
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose nature is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the LSSVM. A calibration experiment is conducted, and the resulting temperature data are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
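A common way to build such a hybrid kernel, sketched here as an assumption about the paper's construction, is a convex combination of the local RBF kernel and the global polynomial kernel; the mixing weight, RBF width, polynomial degree and offset below are placeholder values for the hyper-parameters the chaotic ions motion search would tune.

```python
import math

# Hybrid kernel: convex combination of a local RBF kernel and a global
# polynomial kernel. lam, sigma, d and c are the hyper-parameters that
# the optimizer would search over (values here are placeholders).
def hybrid_kernel(x, z, lam=0.7, sigma=1.0, d=2, c=1.0):
    sq = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    rbf = math.exp(-sq / (2.0 * sigma ** 2))   # local: decays with distance
    dot = sum(xi * zi for xi, zi in zip(x, z))
    poly = (dot + c) ** d                      # global: grows with alignment
    return lam * rbf + (1.0 - lam) * poly

u, v = [1.0, 2.0], [2.0, 0.5]
```

The combination stays a valid (symmetric, positive semi-definite) kernel because both components are, so it drops into the LSSVM kernel matrix unchanged.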
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
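A one-dimensional sketch of the idea, under the illustrative assumptions of a uniform reference density on [−1, 1] and a Legendre basis: expanding the likelihood as L(x) = Σ c_k P_k(x), the model evidence equals c_0 and the posterior mean equals c_1/(3 c_0), so both come semi-analytically from the expansion coefficients.

```python
import math

def legendre(k, x):
    """Legendre polynomial P_k(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def coeffs(like, kmax=8, n=4000):
    """c_k = (2k+1)/2 * integral of L(x)*P_k(x) over [-1, 1] (midpoint rule)."""
    h = 2.0 / n
    xs = [-1.0 + (i + 0.5) * h for i in range(n)]
    return [(2 * k + 1) / 2.0 * h * sum(like(x) * legendre(k, x) for x in xs)
            for k in range(kmax + 1)]

# toy Gaussian-shaped likelihood under a uniform prior density 1/2 on [-1, 1]
like = lambda x: math.exp(-0.5 * ((x - 0.2) / 0.4) ** 2)
c = coeffs(like)
evidence = c[0]                  # Z = (1/2) * integral of L = c_0
post_mean = c[1] / (3.0 * c[0])  # E[x | data] from the first two coefficients
```

Replacing the quadrature projection by a linear least-squares fit to likelihood evaluations gives the non-intrusive variant the abstract contrasts with MCMC.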
The Cauchy Two-Matrix Model, C-Toda Lattice and CKP Hierarchy
NASA Astrophysics Data System (ADS)
Li, Chunxia; Li, Shi-Hao
2018-06-01
This paper mainly discusses the Cauchy two-matrix model and its corresponding integrable hierarchy with the help of orthogonal polynomial theory and Toda-type equations. Starting from the symmetric reduction in Cauchy biorthogonal polynomials, we derive the Toda equation of CKP type (or the C-Toda lattice) as well as its Lax pair by introducing time flows. Then, matrix integral solutions to the C-Toda lattice are extended to give solutions to the CKP hierarchy, revealing that the time-dependent partition function of the Cauchy two-matrix model is nothing but the τ-function of the CKP hierarchy. Finally, the connection between the Cauchy two-matrix model and the Bures ensemble is established from the point of view of integrable systems.
A comparison between space-time video descriptors
NASA Astrophysics Data System (ADS)
Costantini, Luca; Capodiferro, Licia; Neri, Alessandro
2013-02-01
The description of space-time patches is a fundamental task in many applications such as video retrieval or classification. Each space-time patch can be described by using a set of orthogonal functions that represent a subspace, for example a sphere or a cylinder, within the patch. In this work, our aim is to investigate the differences between the spherical descriptors and the cylindrical descriptors. To compute the descriptors, the 3D spherical and cylindrical Zernike polynomials are employed. This is important because both sets of functions are based on the same family of polynomials; only the symmetry is different. Our experimental results show that the cylindrical descriptor outperforms the spherical descriptor, although the performances of the two descriptors are similar.
betaFIT: A computer program to fit pointwise potentials to selected analytic functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Pashov, Asen
2017-01-01
This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
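betaFIT itself performs nonlinear least-squares fits; as a toy illustration of the simplest family it supports, note that an EMO potential with a constant exponent coefficient reduces to the Morse function V(r) = De(1 − e^{−β(r−re)})², and with De and re known, β can be recovered by inverting the model pointwise on the outer branch. The parameter values below are invented.

```python
import math

# EMO with a constant exponent coefficient is the Morse function
# V(r) = De * (1 - exp(-beta*(r - re)))^2; given De and re, beta can be
# recovered from any sample on the outer branch r > re.
De, re, beta_true = 4.5, 1.2, 1.8  # illustrative parameters

def morse(r):
    return De * (1.0 - math.exp(-beta_true * (r - re))) ** 2

def beta_from_point(r, v):
    # for r > re, exp(-beta*(r-re)) = 1 - sqrt(V/De) lies in (0, 1)
    return -math.log(1.0 - math.sqrt(v / De)) / (r - re)

samples = [(r, morse(r)) for r in (1.5, 2.0, 2.5, 3.0)]
betas = [beta_from_point(r, v) for r, v in samples]
```

In betaFIT the exponent coefficient is itself a polynomial in a radial variable, so the recovered β would vary with r and be fitted by least squares rather than inverted exactly.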
NASA Astrophysics Data System (ADS)
Bilchenko, G. G.; Bilchenko, N. G.
2018-03-01
Mathematical modeling problems for the effective control of heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The analysis of the constructive and gas-dynamical restrictions on the control (the blowing) is carried out for porous and perforated surfaces. Classes of functions allowing the controls to be realized while taking into account the arising types of restrictions are suggested. Estimates of the computational complexity of applying the W. G. Horner scheme in the case of the C. Hermite interpolation polynomial are given.
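The complexity advantage of Horner's scheme is easy to make concrete: it evaluates a_n x^n + … + a_1 x + a_0 with n multiplications and n additions, versus O(n²) multiplications for naive term-by-term evaluation with repeated powers.

```python
# Horner's scheme: rewrite the polynomial as
# (...((a_n*x + a_{n-1})*x + a_{n-2})*x + ...) + a_0,
# giving n multiplications and n additions for degree n.
def horner(coeffs, x):
    """coeffs ordered highest degree first: a_n, ..., a_1, a_0."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def naive(coeffs, x):
    """Term-by-term evaluation with explicit powers, for comparison."""
    n = len(coeffs) - 1
    return sum(c * x ** (n - i) for i, c in enumerate(coeffs))

p = [2.0, -3.0, 0.0, 5.0]  # 2x^3 - 3x^2 + 5
```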
Non-polynomial closed string field theory: loops and conformal maps
NASA Astrophysics Data System (ADS)
Hua, Long; Kaku, Michio
1990-11-01
Recently, we proposed the complete classical action for the non-polynomial closed string field theory, which successfully reproduced all closed string tree amplitudes. (The action was simultaneously proposed by the Kyoto group.) In this paper, we analyze the structure of the theory. We (a) compute the explicit conformal map for all g-loop, p-puncture diagrams, (b) compute all one-loop, two-puncture maps in terms of hyper-elliptic functions, and (c) analyze their modular structure. We analyze, but do not resolve, the question of modular invariance.
Towards spinning Mellin amplitudes
NASA Astrophysics Data System (ADS)
Chen, Heng-Yu; Kuo, En-Jui; Kyono, Hideki
2018-06-01
We construct the Mellin representation of the four-point conformal correlation function with external primary operators of arbitrary integer spacetime spins, and obtain a natural proposal for spinning Mellin amplitudes. By restricting to the exchange of symmetric traceless primaries, we generalize the Mellin transform for the scalar case to introduce discrete Mellin variables for incorporating spin degrees of freedom. Based on the structures of spinning three- and four-point Witten diagrams, we also obtain a generalization of the Mack polynomial, which can be regarded as a natural kinematical polynomial basis for computing spinning Mellin amplitudes using different choices of interaction vertices.
A comparison of polynomial approximations and artificial neural nets as response surfaces
NASA Technical Reports Server (NTRS)
Carpenter, William C.; Barthelemy, Jean-Francois M.
1992-01-01
Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net, and the number of designs needed to train an approximation is discussed.
Operational method of solution of linear non-integer ordinary and partial differential equations.
Zhukovsky, K V
2016-01-01
We propose an operational method, with recourse to generalized forms of orthogonal polynomials, for the solution of a variety of differential equations of mathematical physics. Operational definitions of generalized families of orthogonal polynomials are used in this context. Integral transforms and the operational exponent, together with some special functions, are also employed in the solutions. Solutions of physical problems related to heat propagation in various models, evolutionary processes, Black-Scholes-like equations, etc., are demonstrated by the operational technique.
Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality
NASA Astrophysics Data System (ADS)
Ayala, Mario; Carinci, Gioia; Redig, Frank
2018-06-01
We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.
Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos
2001-09-11
Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc. … The Askey scheme, represented as a tree structure in Figure 1 (following [24]), classifies the hypergeometric orthogonal polynomials and indicates the limit relations between them. [Figure 1: The Askey scheme of orthogonal polynomials.] The orthogonal polynomials associated with the generalized polynomial chaos…
Hyperspectral recognition of processing tomato early blight based on GA and SVM
NASA Astrophysics Data System (ADS)
Yin, Xiaojun; Zhao, SiFeng
2013-03-01
Early blight of processing tomato seriously affects the yield and quality of the crop. We determined the leaf spectra of processing tomato at different severity levels of early blight and took the sensitive bands as the input vector of a support vector machine (SVM). A genetic algorithm (GA) was used to optimize the SVM parameters so that the different disease severity levels could be recognized. The results show that the sensitive bands for the different severity levels of processing tomato early blight are 628-643 nm and 689-692 nm. With these sensitive bands as the GA-SVM input vector, the best penalty parameter is 0.129 and the best kernel function parameter is 3.479. Classification training and testing were carried out with polynomial, radial basis function, and sigmoid kernels; the best classification model is the SVM with the radial basis function kernel, giving a training accuracy of 84.615% and a testing accuracy of 80.681%. Combining GA and SVM achieves multi-class recognition of processing tomato early blight and provides technical support for predicting its occurrence, development, and diffusion over large areas.
Three-dimensional trend mapping from wire-line logs
Doveton, J.H.; Ke-an, Z.
1985-01-01
Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of shaliness of the unit.
However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.
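The degree-of-fit step can be sketched in miniature: fit a low-order trend to a log trace by least squares and use R² from the analysis of variance as the proportion of vertical variability the trend explains. The synthetic gamma-ray trace below is invented for illustration; only the linear (first-moment) trend is fitted here, where the paper uses higher moments as well.

```python
# Fit a linear depth trend to a log trace by least squares and measure
# the degree of fit (R^2), separating the major vertical trend from
# fine-scale fluctuations.
def fit_trend(depths, values):
    n = len(depths)
    mx = sum(depths) / n
    my = sum(values) / n
    sxx = sum((d - mx) ** 2 for d in depths)
    sxy = sum((d - mx) * (v - my) for d, v in zip(depths, values))
    b = sxy / sxx
    a = my - b * mx
    ss_tot = sum((v - my) ** 2 for v in values)
    ss_res = sum((v - (a + b * d)) ** 2 for d, v in zip(depths, values))
    r2 = 1.0 - ss_res / ss_tot  # proportion of vertical variability explained
    return a, b, r2

# synthetic upward-cleaning trace: shaliness decreasing with height,
# plus a small periodic fluctuation standing in for fine-scale variation
trace = [(z, 80.0 - 0.5 * z + (1.0 if z % 4 == 0 else -1.0)) for z in range(40)]
a, b, r2 = fit_trend([z for z, _ in trace], [v for _, v in trace])
```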
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective in reestablishing ventricular function without blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed and an experimental device is designed to measure the sample data for fitting the estimation models. Examining the goodness of fit numerically and graphically, the polynomial model shows the best behavior among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped-parameter model is utilized to calculate the pre-support and post-support left ventricular pressure. Furthermore, by adjusting the driving pressure beyond the range of the sample data, the assist pressure is estimated with a similar waveform and the post-support left ventricular pressure approaches the value of an adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
Methods in Symbolic Computation and p-Adic Valuations of Polynomials
NASA Astrophysics Data System (ADS)
Guan, Xiao
Symbolic computation appears widely in many mathematical fields, such as combinatorics, number theory and stochastic processes. The techniques created in the area of experimental mathematics provide efficient ways of symbolic computing and of verifying complicated relations. Part I consists of three problems. The first focuses on a unimodal sequence derived from a quartic integral; many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocal of the Catalan numbers, which springs from the closed form given by Mathematica; three methods in special functions are then used to justify this result. The third addresses closed-form solutions for the moments of products of generalized elliptic integrals, combining experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed by positive integers, the package developed in Mathematica creates a certain tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoints of field extensions, nonparametric statistics and random matrices.
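A toy sketch of the tree-building rule in Part II, under the assumption (illustrative, not the package's exact criterion) that a residue class n ≡ r (mod p^k) becomes a leaf when v_p(f(n)) is constant on it and splits otherwise; f(n) = n² + 1 with p = 2 is a hypothetical example where the tree closes at the first level.

```python
# p-adic valuation v_p(m): the exponent of the highest power of p
# dividing m.
def vp(m, p):
    if m == 0:
        return float("inf")
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

f = lambda n: n * n + 1  # illustrative polynomial
p = 2

# level-1 residue classes mod 2: collect the valuations observed in each
# class; a singleton set means the class is constant and becomes a leaf
level1 = {r: {vp(f(n), p) for n in range(r, 200, 2)} for r in range(2)}
```

Here even n gives v_2(n² + 1) = 0 and odd n gives v_2(n² + 1) = 1 (since odd squares are 1 mod 8), so both classes are constant and no further splitting is needed; polynomials whose classes keep splitting generate the infinite trees studied in the thesis.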
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and the animal permanent environmental effect and two knots for the maternal additive genetic effect and the maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
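The quadratic B-spline base functions can be sketched with the Cox-de Boor recursion; the clamped knot vector below (interior knots at ages 8 and 16 on a 0-24 range) is invented for illustration, not taken from the Canchim analysis. On a clamped knot vector the basis functions are nonnegative, locally supported, and sum to one at every age inside the range.

```python
# Cox-de Boor recursion for B-spline basis functions.
def bspline_basis(i, k, knots, t):
    """Degree-k basis function B_{i,k} evaluated at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * \
               bspline_basis(i, k - 1, knots, t)
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * \
                bspline_basis(i + 1, k - 1, knots, t)
    return left + right

# clamped quadratic (degree 2) knot vector with interior knots at ages 8, 16
knots = [0, 0, 0, 8, 16, 24, 24, 24]
nbasis = len(knots) - 2 - 1  # number of basis functions = #knots - degree - 1
total = lambda t: sum(bspline_basis(i, 2, knots, t) for i in range(nbasis))
```

Each animal-age record contributes to only the few basis functions whose support covers that age, which is why B-spline fits behave better than global Legendre polynomials at the sparse mature ages.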
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems.
The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
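As a minimal illustration of the PCE machinery that PCKF builds on (not the authors' adaptive algorithm), the sketch below expands f(x) = e^x of a standard normal input in probabilists' Hermite polynomials, computing the coefficients by Gauss-Hermite quadrature; the toy function and truncation order are assumptions:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-HermiteE quadrature: nodes/weights for the weight exp(-x^2/2)
nodes, weights = He.hermegauss(40)
norm = math.sqrt(2 * math.pi)

def pce_coeffs(f, order):
    # c_k = E[f(X) He_k(X)] / k! for standard normal X (He_k: probabilists' Hermite)
    cs = []
    for k in range(order + 1):
        Hk = He.hermeval(nodes, [0] * k + [1])
        cs.append(np.sum(weights * f(nodes) * Hk) / norm / math.factorial(k))
    return np.array(cs)

coeffs = pce_coeffs(np.exp, 10)

# Evaluate the truncated expansion at a test point and compare with f
x0 = 0.7
approx = sum(c * He.hermeval(x0, [0] * k + [1]) for k, c in enumerate(coeffs))
```

For f(x) = e^x the exact coefficients are c_k = e^{1/2}/k!, so the truncation error can be monitored term by term, which is exactly the accuracy/cost tradeoff the abstract describes.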
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations consist of subsets whose probability is readily computable, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Among the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty), as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.
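The Bernstein enclosure property that such sizing relies on, namely that a polynomial on [0,1] is bounded between the smallest and largest coefficients of its Bernstein expansion, can be sketched for a univariate toy requirement function (the polynomial below is an illustrative assumption, not from the paper):

```python
from math import comb

def bernstein_coeffs(a):
    # a[j]: power-basis coefficients (ascending) of a degree-n polynomial on [0, 1].
    # Bernstein coefficients: b_i = sum_j C(i,j)/C(n,j) * a_j, j <= i.
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# Toy requirement function p(x) = 2x^2 - 2x + 1; its true range on [0,1] is [0.5, 1]
a = [1.0, -2.0, 2.0]
b = bernstein_coeffs(a)
lo, hi = min(b), max(b)   # enclosure: lo <= p(x) <= hi for all x in [0, 1]
```

The endpoint coefficients reproduce p(0) and p(1) exactly, and [lo, hi] always encloses the true range; subdividing the box tightens the enclosure, which is what makes the subsets "readily computable".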
Simulated quantum computation of molecular energies.
Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin
2005-09-09
The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.
Moments of zeta functions associated to hyperelliptic curves over finite fields
Rubinstein, Michael O.; Wu, Kaiyu
2015-01-01
Let q be an odd prime power, and consider the set of square-free monic polynomials D(x) ∈ F_q[x] of degree d. Katz and Sarnak showed that the moments, over this set, of the zeta functions associated to the curves y^2 = D(x), evaluated at the central point, tend, as q → ∞, to the moments of characteristic polynomials, evaluated at the central point, of matrices in USp(2⌊(d−1)/2⌋). Using techniques that were originally developed for studying moments of L-functions over number fields, Andrade and Keating conjectured an asymptotic formula for these moments with q fixed and d → ∞. We provide theoretical and numerical evidence in favour of their conjecture. In some cases, we are able to work out exact formulae for the moments and use these to precisely determine the size of the remainder term in the predicted moments. PMID:25802418
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
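A hedged sketch of the second compression stage described above: fit a smoothly varying filter-coefficient track with a low-order Legendre series and measure the recovery error in dB. The toy coefficient track and the fitting order are assumptions; real ARMA coefficients from HRTF data would replace them:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy "filter coefficient" varying smoothly with cos(elevation) in [-1, 1]
x = np.linspace(-1.0, 1.0, 181)
coef_track = 0.3 + 0.5 * x - 0.2 * x**2 + 0.05 * np.cos(3 * x)

order = 8                              # keep 9 Legendre coefficients vs 181 samples
c = L.legfit(x, coef_track, order)
recovered = L.legval(x, c)
err_db = 20 * np.log10(np.max(np.abs(recovered - coef_track))
                       / np.max(np.abs(coef_track)))
```

Because Legendre coefficients of smooth spatial tracks decay quickly, a handful of them recovers the track far below the 4 dB error budget quoted in the abstract, which is the source of the >98% compression ratios.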
NASA Astrophysics Data System (ADS)
Cao, Jin; Jiang, Zhibin; Wang, Kangzhou
2017-07-01
Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with an optimized least squares support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
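The PSR step, time-delay embedding of a scalar series into a higher-dimensional phase space, can be sketched as follows (the embedding dimension and delay are illustrative choices, not values from the article):

```python
import numpy as np

def psr_embed(series, dim, tau):
    """Time-delay embedding: row t is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Toy demand series; each embedded row becomes one training sample for the LSSVM
x = np.sin(0.1 * np.arange(200))
X = psr_embed(x, dim=3, tau=5)
```

Each row of X is one reconstructed state vector, which is what "enriching the time-series dynamics of the limited data sample" amounts to in practice.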
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandati, Y.; Quesne, C.
2013-07-15
The power of the disconjugacy properties of second-order differential equations of Schrödinger type to check the regularity of rationally extended quantum potentials connected with exceptional orthogonal polynomials is illustrated by re-examining the extensions of the isotonic oscillator (or radial oscillator) potential derived in kth-order supersymmetric quantum mechanics or the multistep Darboux-Bäcklund transformation method. The function arising in the potential denominator is proved to be a polynomial with a nonvanishing constant term, whose value is calculated by induction over k. The sign of this term being the same as that of the already known highest-degree term, the potential denominator has the same sign at both extremities of the definition interval, a property that is shared by the seed eigenfunction used in the potential construction. By virtue of disconjugacy, such a property implies the nodeless character of both the eigenfunction and the resulting potential.
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytic functions and on their approximation by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
An Exact Formula for Calculating Inverse Radial Lens Distortions
Drap, Pierre; Lefèvre, Julien
2016-01-01
This article presents a new approach to calculating the inverse of radial distortions. The method provides a model of inverse radial distortion: given the usual polynomial model of distortion, it proposes another polynomial expression whose coefficients are functions of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series, used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, is attractive in terms of performance, reuse of existing software, and bridging between existing software tools that do not consider distortion from the same point of view. PMID:27258288
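A numerical sketch of the idea, with hypothetical forward coefficients: invert a polynomial radial distortion with an inverse polynomial of the same form, rather than by iteration. The article's recursive closed-form coefficients are replaced here by a least-squares fit, which approximates them:

```python
import numpy as np

k1, k2 = -0.12, 0.03                     # hypothetical forward distortion coefficients
r = np.linspace(0.0, 1.0, 400)           # undistorted radius samples
rd = r * (1 + k1 * r**2 + k2 * r**4)     # forward radial distortion

# Fit the inverse with the same polynomial form: r ~ rd*(1 + b1*rd^2 + b2*rd^4)
A = np.column_stack([rd**3, rd**5])
b1, b2 = np.linalg.lstsq(A, r - rd, rcond=None)[0]

undistorted = rd * (1 + b1 * rd**2 + b2 * rd**4)
max_err = np.max(np.abs(undistorted - r))
```

To first order the series inversion gives b1 = -k1, which the fitted coefficient reproduces approximately; the residual error is far smaller than the distortion being corrected.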
A triangular property of the associated Legendre functions
NASA Technical Reports Server (NTRS)
Fineschi, S.; Landi Degl'innocenti, E.
1990-01-01
A mathematical formula is introduced and proved which relates the associated Legendre functions with given nonnegative integral indices. The application of this formula in simplifying the calculation of collisional electron-atom cross sections higher than the dipole is mentioned. A proof of the stated identity using the Gegenbauer polynomials and their generating function is given.
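The Gegenbauer generating function used in the proof, (1 - 2xt + t^2)^(-alpha) = sum_n C_n^alpha(x) t^n, can be checked numerically from the standard three-term recurrence (the parameter values below are arbitrary test choices):

```python
def gegenbauer_seq(alpha, x, nmax):
    # Three-term recurrence: n*C_n = 2x(n+alpha-1)*C_{n-1} - (n+2*alpha-2)*C_{n-2}
    C = [1.0, 2 * alpha * x]
    for n in range(2, nmax + 1):
        C.append((2 * x * (n + alpha - 1) * C[n - 1]
                  - (n + 2 * alpha - 2) * C[n - 2]) / n)
    return C

alpha, x, t = 0.75, 0.3, 0.4
series = sum(c * t**n for n, c in enumerate(gegenbauer_seq(alpha, x, 60)))
closed = (1 - 2 * x * t + t**2) ** (-alpha)
```

For |t| < 1 and |x| <= 1 the series converges geometrically, so 60 terms already match the closed form to machine-level accuracy.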
A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.
NASA Technical Reports Server (NTRS)
Harris, J. D.
1971-01-01
The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
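A sketch of extracting a quadratic factor by a Newton iteration on the division remainder, in the spirit of the Newton-Bairstow generalization discussed above. As simplifying assumptions, the Jacobian is approximated by finite differences rather than Bairstow's analytic recurrences, and ordinary floating point stands in for interval arithmetic:

```python
import numpy as np

def quad_remainder(a, u, v):
    """Remainder (r1, r0) of dividing the poly with descending coeffs a by x^2+u*x+v."""
    _, r = np.polydiv(a, [1.0, u, v])
    r = np.atleast_1d(r)
    r1 = r[-2] if len(r) >= 2 else 0.0
    return np.array([r1, r[-1]])

def quad_factor(a, u0=0.0, v0=0.0, tol=1e-12, itmax=50):
    """Newton iteration driving the remainder to zero (numeric Jacobian)."""
    uv = np.array([u0, v0])
    for _ in range(itmax):
        F = quad_remainder(a, *uv)
        if np.max(np.abs(F)) < tol:
            break
        h = 1e-7
        J = np.column_stack([
            (quad_remainder(a, uv[0] + h, uv[1]) - F) / h,
            (quad_remainder(a, uv[0], uv[1] + h) - F) / h,
        ])
        uv = uv - np.linalg.solve(J, F)
    return uv

# (x^2 + 2x + 5)(x - 1) = x^3 + x^2 + 3x - 5; expect u = 2, v = 5
u, v = quad_factor([1.0, 1.0, 3.0, -5.0], u0=1.0, v0=1.0)
```

The recovered quadratic factor x^2 + 2x + 5 carries the complex-conjugate eigenvalue pair, which is why factoring into quadratics (rather than linear factors) suits real characteristic polynomials.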
Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division
NASA Astrophysics Data System (ADS)
Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano
2013-04-01
We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.
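The core operation, multivariate polynomial division modulo a Gröbner basis, can be sketched in a toy two-variable setting with polynomials stored as monomial-to-coefficient dicts. The ideal and its lex Gröbner basis below are illustrative choices, unrelated to the amplitude computations:

```python
def lead(p):
    """Leading (monomial, coefficient) under lex order with x > y."""
    m = max(p)
    return m, p[m]

def sub_mul(f, g, mono, coeff):
    """Return f - coeff * x^mono * g."""
    out = dict(f)
    for m, c in g.items():
        mm = tuple(a + b for a, b in zip(mono, m))
        out[mm] = out.get(mm, 0) - coeff * c
        if out[mm] == 0:
            del out[mm]
    return out

def reduce_mod(f, G):
    """Multivariate division: remainder of f modulo the basis G (lex order)."""
    f, r = dict(f), {}
    while f:
        m, c = lead(f)
        for g in G:
            gm, gc = lead(g)
            if all(a >= b for a, b in zip(m, gm)):
                f = sub_mul(f, g, tuple(a - b for a, b in zip(m, gm)), c / gc)
                break
        else:                      # leading term reducible by no basis element
            r[m] = r.get(m, 0) + c
            del f[m]
    return r

# Groebner basis (lex, x > y) of the ideal <x^2 - y, y^2 - x>:
G = [{(1, 0): 1, (0, 2): -1},      # x - y^2
     {(0, 4): 1, (0, 1): -1}]      # y^4 - y
rem = reduce_mod({(3, 0): 1}, G)   # reduce f = x^3; remainder is y^3
```

The remainder is the canonical representative of f in the quotient ring, which is the role the residues at the multiparticle cuts play in the integrand decomposition.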
Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm
NASA Astrophysics Data System (ADS)
Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung
2016-07-01
In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.
NASA Astrophysics Data System (ADS)
Li, Shaoxin; Zhang, Yanjiao; Xu, Junfa; Li, Linfang; Zeng, Qiuyao; Lin, Lin; Guo, Zhouyi; Liu, Zhiming; Xiong, Honglian; Liu, Songhao
2014-09-01
This study presents a noninvasive prostate cancer screening method that applies serum surface-enhanced Raman scattering (SERS) and support vector machine (SVM) techniques to peripheral blood samples. SERS measurements are performed with silver nanoparticles on serum samples from 93 prostate cancer patients and 68 healthy volunteers. Three types of kernel functions, including linear, polynomial, and Gaussian radial basis function (RBF), are employed to build SVM diagnostic models for classifying the measured SERS spectra. To comparably evaluate the performance of the SVM classification models, the standard multivariate statistical method of principal component analysis (PCA) is also applied to classify the same datasets. The results show that the RBF-kernel SVM diagnostic model achieves a diagnostic accuracy of 98.1%, superior to the 91.3% obtained with the PCA method. The receiver operating characteristic curves of the diagnostic models further confirm these results. This study demonstrates that label-free serum SERS analysis combined with an SVM diagnostic algorithm has great potential for noninvasive prostate cancer screening.
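To illustrate why a nonlinear kernel can outperform a linear one on data that are not linearly separable, the sketch below uses kernel ridge classification as a simple stand-in for the SVM, on synthetic two-class data. Everything here is an illustrative assumption; the study itself trains SVMs on SERS spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Two classes that no hyperplane separates: an inner disc and an outer ring
r = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
th = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.column_stack([r * np.cos(th), r * np.sin(th)])
y = np.concatenate([-np.ones(n), np.ones(n)])

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_acc(K):
    # Regularized kernel fit; sign of the fitted values gives the class
    alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
    return np.mean(np.sign(K @ alpha) == y)

acc_linear = kernel_ridge_acc(X @ X.T)   # linear kernel
acc_rbf = kernel_ridge_acc(rbf(X, X))    # Gaussian RBF kernel
```

The RBF kernel separates the radial classes while the linear kernel cannot, mirroring the kernel comparison in the abstract.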
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-11-01
Using a locally-enriched strategy to enrich a small/local part of the problem with the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering convergence rate in strain energy, growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.
Computational aspects of pseudospectral Laguerre approximations
NASA Technical Reports Server (NTRS)
Funaro, Daniele
1989-01-01
Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.
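The ill-conditioning, and the effect of a weight-based scaling, can be demonstrated with the collocation matrix of Laguerre polynomials at Gauss-Laguerre nodes. Scaling the rows by the square root of the quadrature weights is one textbook choice of scaling, not necessarily the paper's scaling function:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, lagvander

n = 20
x, w = laggauss(n)                  # Gauss-Laguerre nodes and weights on [0, inf)
V = lagvander(x, n - 1)             # L_k(x_i): plain pseudospectral matrix
Vs = np.sqrt(w)[:, None] * V        # rows scaled by sqrt of the weights

cond_plain = np.linalg.cond(V)
cond_scaled = np.linalg.cond(Vs)
```

Because sum_i w_i L_j(x_i) L_k(x_i) = delta_jk holds exactly at the Gauss nodes for degrees up to 2n-1, the scaled matrix is numerically orthogonal with condition number near 1, while the unscaled matrix is catastrophically ill-conditioned from the exponential growth of L_k at the large nodes.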
Analysis of bonded joints. [shear stress and stress-strain diagrams
NASA Technical Reports Server (NTRS)
Srinivas, S.
1975-01-01
A refined elastic analysis of bonded joints which accounts for transverse shear deformation and transverse normal stress was developed to obtain the stresses and displacements in the adherends and in the bond. The displacements were expanded in terms of polynomials in the thicknesswise coordinate; the coefficients of these polynomials were functions of the axial coordinate. The stress distribution was obtained in terms of these coefficients by using strain-displacement and stress-strain relations. The governing differential equations were obtained by integrating the equations of equilibrium, and were solved. The boundary conditions (interface or support) were satisfied to complete the analysis. Single-lap, flush, and double-lap joints were analyzed, along with the effects of adhesive properties, plate thicknesses, material properties, and plate taper on maximum peel and shear stresses in the bond. The results obtained by using the thin-beam analysis available in the literature were compared with the results obtained by using the refined analysis. In general, thin-beam analysis yielded reasonably accurate results, but in certain cases the errors were high. Numerical investigations showed that the maximum peel and shear stresses in the bond can be reduced by (1) using a combination of flexible and stiff bonds, (2) using stiffer lap plates, and (3) tapering the plates.
Determination of the expansion of the potential of the earth's normal gravitational field
NASA Astrophysics Data System (ADS)
Kochiev, A. A.
The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.
Expressions for Fields in the ITER Tokamak
NASA Astrophysics Data System (ADS)
Sharma, Stephen
2017-10-01
The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.
Orthogonal polynomial projectors for the Projector Augmented Wave (PAW) formalism.
NASA Astrophysics Data System (ADS)
Holzwarth, N. A. W.; Matthews, G. E.; Tackett, A. R.; Dunning, R. B.
1998-03-01
The PAW method for density functional electronic structure calculations, developed by Blöchl [Phys. Rev. B 50, 17953 (1994)] and also used by our group [Phys. Rev. B 55, 2005 (1997)], has the numerical advantages of a pseudopotential technique while retaining the physics of an all-electron formalism. We describe a new method for generating the necessary set of atom-centered projector and basis functions, based on choosing the projector functions from a set of orthogonal polynomials multiplied by a localizing weight factor. Numerical benefits of the new scheme result from having direct control of the shape of the projector functions and from the use of a simple repulsive local potential term to eliminate "ghost state" problems, which can haunt calculations of this kind. We demonstrate the method by calculating the cohesive energies of CaF2 and Mo and the density of states of CaMoO4, which shows detailed agreement with LAPW results over a 66 eV range of energy including upper core, valence, and conduction band states.
Thermodynamic characterization of networks using graph polynomials
NASA Astrophysics Data System (ADS)
Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.
2015-09-01
In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
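A minimal sketch of the spectral side of this construction: compute the normalized Laplacian of a small graph and derive the partition function, average energy, and entropy from its eigenvalues at an inverse temperature beta. Here the spectrum is computed directly, whereas the paper approximates these quantities through low-order Taylor series in the Laplacian traces:

```python
import numpy as np

def normalized_laplacian(A):
    d = A.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - Dinv @ A @ Dinv

# A 4-node path graph as a toy network
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

lam = np.linalg.eigvalsh(normalized_laplacian(A))
beta = 1.0
Z = np.exp(-beta * lam).sum()                 # Boltzmann partition function
U = (lam * np.exp(-beta * lam)).sum() / Z     # average energy
S = beta * U + np.log(Z)                      # entropy
```

Tracking (U, S) and the induced temperature along a sequence of graph snapshots gives the thermodynamic trajectory used to detect abrupt changes in network evolution.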
Spatiotemporal accessible solitons in fractional dimensions.
Zhong, Wei-Ping; Belić, Milivoj R; Malomed, Boris A; Zhang, Yiqi; Huang, Tingwen
2016-07-01
We report solutions for solitons of the "accessible" type in globally nonlocal nonlinear media of fractional dimension (FD), viz., for self-trapped modes in the space of effective dimension 2
Gauss-Manin Connection in Disguise: Calabi-Yau Threefolds
NASA Astrophysics Data System (ADS)
Alim, Murad; Movasati, Hossein; Scheidegger, Emanuel; Yau, Shing-Tung
2016-06-01
We describe a Lie Algebra on the moduli space of non-rigid compact Calabi-Yau threefolds enhanced with differential forms and its relation to the Bershadsky-Cecotti-Ooguri-Vafa holomorphic anomaly equation. In particular, we describe algebraic topological string partition functions {{F}g^alg, g ≥ 1}, which encode the polynomial structure of holomorphic and non-holomorphic topological string partition functions. Our approach is based on Grothendieck's algebraic de Rham cohomology and on the algebraic Gauss-Manin connection. In this way, we recover a result of Yamaguchi-Yau and Alim-Länge in an algebraic context. Our proofs use the fact that the special polynomial generators defined using the special geometry of deformation spaces of Calabi-Yau threefolds correspond to coordinates on such a moduli space. We discuss the mirror quintic as an example.
Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W
2014-10-01
A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. 
In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.
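The offline surrogate idea, approximating the homogenised response by a cubic polynomial in the MBS degrees of freedom, can be sketched with a least-squares fit on a toy two-DOF response. The response function and the sampling are assumptions; the paper constructs such polynomials via the PCE:

```python
import numpy as np
from itertools import combinations_with_replacement

def cubic_design(X):
    """Design matrix with all monomials of total degree <= 3 in the columns of X."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in (1, 2, 3):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# Toy homogenised response: axial force as a nonlinear function of two DOFs
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))              # e.g. (compression, flexion) states
F = 3 * X[:, 0] + 0.5 * X[:, 0] ** 3 - 1.2 * X[:, 0] * X[:, 1] ** 2

coef, *_ = np.linalg.lstsq(cubic_design(X), F, rcond=None)
resid = np.max(np.abs(cubic_design(X) @ coef - F))
```

Once the 10 coefficients are known, evaluating the polynomial is essentially free inside the MBS time stepping, which is the point of the offline pre-computation; choosing the sample states X well is exactly the challenge the paper addresses.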
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
NASA Astrophysics Data System (ADS)
Xu, Lili; Luo, Shuqian
2010-11-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphological black top hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular, we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions, including polynomial coefficients, relative shell displacements, and detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second-order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal parameter values are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
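The optimisation step above reduces to linear algebra: if the figure of merit is quadratic in the small alignment parameters p, setting each derivative to zero gives a linear system. A minimal sketch with illustrative coefficients (not ray-trace output):

```python
import numpy as np

# If the figure of merit is quadratic in the small alignment parameters p,
#   f(p) = c + b.p + 0.5 * p^T A p,
# then setting each derivative df/dp_i = 0 gives the linear system A p = -b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])   # quadratic coefficients (illustrative)
b = np.array([-1.0, 0.5, 0.2])   # linear coefficients (illustrative)

p_opt = np.linalg.solve(A, -b)   # stationary point = optimal parameters
grad = b + A @ p_opt             # gradient at the optimum, should vanish
print(p_opt, np.allclose(grad, 0.0))
```

In the telescope application the entries of A and b would come from the ray trace through the Wolter I optic.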
Radial Basis Function Based Quadrature over Smooth Surfaces
2016-03-24
Radial basis functions φ(r) (Table 1): piecewise-smooth, conditionally positive definite choices include the monomial |r|^(2m+1) and the thin-plate spline (TPS) |r|^(2m) ln|r|, alongside infinitely smooth kernels. …smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29]
A continuous function model for path prediction of entities
NASA Astrophysics Data System (ADS)
Nanda, S.; Pray, R.
2007-04-01
As militaries across the world continue to evolve, the roles of humans in various theatres of operation are being increasingly targeted by military planners for substitution with automation. Forward observation and direction of supporting arms to neutralize threats from dynamic adversaries is one such example. However, contemporary tracking and targeting systems are incapable of serving autonomously for they do not embody the sophisticated algorithms necessary to predict the future positions of adversaries with the accuracy offered by the cognitive and analytical abilities of human operators. The need for these systems to incorporate methods characterizing such intelligence is therefore compelling. In this paper, we present a novel technique to achieve this goal by modeling the path of an entity as a continuous polynomial function of multiple variables expressed as a Taylor series with a finite number of terms. We demonstrate the method for evaluating the coefficient of each term to define this function unambiguously for any given entity, and illustrate its use to determine the entity's position at any point in time in the future.
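The path model above can be sketched in one dimension per coordinate: fit a truncated Taylor-like polynomial to an entity's observed track and extrapolate. The track below is synthetic and the single-variable fit is a simplification of the paper's multivariable series:

```python
import numpy as np

# Minimal sketch (not the authors' full multivariable model): fit a truncated
# Taylor-like polynomial to an entity's observed track, one polynomial per
# coordinate, then extrapolate to a future time.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # observation times (s)
x = 2.0 + 3.0 * t + 0.5 * t**2                   # observed x positions
y = 1.0 - 1.0 * t + 0.25 * t**2                  # observed y positions

cx = np.polyfit(t, x, deg=2)   # fitted coefficients play the role of Taylor terms
cy = np.polyfit(t, y, deg=2)

t_future = 6.0
pred = (np.polyval(cx, t_future), np.polyval(cy, t_future))
print(pred)   # predicted position at t = 6 s
```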
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muralidhar, K Raja; Komanduri, K
2014-06-01
Purpose: The objective of this work is to present a mechanism for calculating inflection points on profiles at various depths and field sizes, together with a study of the percentage dose at the inflection point for various field sizes and depths of 6X FFF and 10X FFF energy profiles. Methods: Percentage dose was plotted against inflection-point position. Using a polynomial function, the authors also formulated equations for locating the inflection point on the profiles of 6X FFF and 10X FFF beams for all field sizes and at various depths. Results: In a flattening-filter-free (FFF) beam, unlike in flattened beams, the dose at the inflection point of the profile decreases as field size increases for 10X FFF, whereas for 6X FFF the dose at the inflection point initially increases up to 10x10 cm2 and then decreases. The polynomial function was fitted for both FFF beams for all field sizes and depths. For small fields of less than 5x5 cm2, the inflection point and the FWHM are almost the same, so analysis can proceed just as for flattened beams. A change of 10% in dose can change the field width by 1 mm. Conclusion: Deriving equations from the fitted polynomial to define the inflection point is a precise and accurate way to obtain the inflection-point dose on any FFF beam profile at any depth to within 1% accuracy. Corrections can be made in future studies based on data from multiple machines. A brief study also evaluated inflection-point positions with respect to dose for various field sizes and depths of the 6X FFF and 10X FFF energy profiles.
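The polynomial-based inflection-point idea can be sketched as follows. The sigmoid-edged profile and the fit region are illustrative assumptions, not the study's machine data:

```python
import numpy as np

# Fit a polynomial to one penumbra of a synthetic beam profile and locate the
# inflection point as a root of the fit's second derivative.
xpos = np.linspace(-2.0, 2.0, 41)                 # off-axis position (cm)
profile = 100.0 / (1.0 + np.exp(4.0 * (np.abs(xpos) - 1.0)))  # FFF-like edges

# Fit only the penumbra region of one edge, where the profile is smooth.
edge = (xpos > 0.5) & (xpos < 1.5)
coeffs = np.polyfit(xpos[edge], profile[edge], deg=3)

d2 = np.polyder(coeffs, 2)                        # second derivative of the fit
roots = np.roots(d2)
inflection = roots[(roots > 0.5) & (roots < 1.5)][0]
print(inflection)   # near 1.0 cm, where the curvature of this profile changes sign
```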
Yu, Yi-Kuo
2003-08-15
The exact analytical result for a class of integrals involving (associated) Legendre polynomials of complicated argument is presented. The method employed can in principle be generalized to integrals involving other special functions. This class of integrals also proves useful in electrostatic problems in which dielectric spheres are involved, which is of importance in modeling the dynamics of biological macromolecules. In fact, with this solution, a more robust foundation is laid for the Generalized Born method in modeling the dynamics of biomolecules. ©2003 Elsevier B.V. All rights reserved.
An asymptotic formula for polynomials orthonormal with respect to a varying weight. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komlov, A V; Suetin, S P
2014-09-30
This paper gives a proof of the theorem announced by the authors in the preceding paper with the same title. The theorem states that the asymptotic behaviour of the polynomials which are orthonormal with respect to the varying weight e^(−2nQ(x)) p_g(x)/√(∏_{j=1}^{2p} (x−e_j)) coincides with the asymptotic behaviour of the Nuttall psi-function, which solves a special boundary-value problem on the relevant hyperelliptic Riemann surface of genus g = p−1. Here e_1
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation- based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.
Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J
2018-03-15
The most interesting events in a radiological monitoring network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. Within this work, the probability of a dose more than 2 standard deviations above the mean is proposed as a surveillance statistic for higher doses. This probability is estimated by using Hermite polynomials to reconstruct the PDF. The result is a probability of ~6 ± 1%, much greater than the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
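A Hermite-polynomial PDF reconstruction of this kind can be sketched with a Gram-Charlier A series; the skewness and excess-kurtosis values below are synthetic assumptions, not the network's data:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Reconstruct a skewed PDF from its moments with probabilists' Hermite
# polynomials He_n, then integrate the tail beyond +2 standard deviations.
skew, kurt_excess = 0.5, 0.2       # hypothetical moments of standardised doses

z = np.linspace(-6, 6, 2001)
phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
# Gram-Charlier A series: f(z) = phi(z) [1 + (g1/6) He3(z) + (g2/24) He4(z)]
coeffs = np.zeros(5)
coeffs[0] = 1.0
coeffs[3] = skew / 6.0
coeffs[4] = kurt_excess / 24.0
f = phi * hermeval(z, coeffs)

mask = z >= 2.0
p_tail = np.sum(f[mask]) * (z[1] - z[0])   # P(dose > mean + 2 sd), Riemann sum
print(round(p_tail, 4))   # exceeds the Gaussian 0.025 because of the positive skew
```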
Certain approximation problems for functions on the infinite-dimensional torus: Lipschitz spaces
NASA Astrophysics Data System (ADS)
Platonov, S. S.
2018-02-01
We consider some questions about the approximation of functions on the infinite-dimensional torus by trigonometric polynomials. Our main results are analogues of the direct and inverse theorems in the classical theory of approximation of periodic functions and a description of the Lipschitz spaces on the infinite-dimensional torus in terms of the best approximation.
Sobolev-orthogonal systems of functions associated with an orthogonal system
NASA Astrophysics Data System (ADS)
Sharapudinov, I. I.
2018-02-01
For every system of functions {φ_k(x)} which is orthonormal on (a,b) with weight ρ(x) and every positive integer r, we construct a new associated system of functions {φ_{r,k}(x)}_{k=0}^∞ which is orthonormal with respect to a Sobolev-type inner product of the form ⟨f,g⟩ = Σ_{ν=0}^{r−1} f^{(ν)}(a) g^{(ν)}(a) + ∫_a^b f^{(r)}(t) g^{(r)}(t) ρ(t) dt. We study the convergence of Fourier series in the systems {φ_{r,k}(x)}_{k=0}^∞. In the important particular cases of such systems generated by the Haar functions and the Chebyshev polynomials T_n(x) = cos(n arccos x), we obtain explicit representations for the φ_{r,k}(x) that can be used to study their asymptotic properties as k→∞ and the approximation properties of Fourier sums in the system {φ_{r,k}(x)}_{k=0}^∞. Special attention is paid to the approximation properties of Fourier series in systems of this type generated by Haar functions and Chebyshev polynomials.
The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix
NASA Astrophysics Data System (ADS)
Antipov, Yuri A.
2014-10-01
A new technique is proposed for the solution of the Riemann-Hilbert problem with the Chebotarev-Khrapkov matrix coefficient G(t) = α1(t)I + α2(t)Q(t), where α1(t), α2(t) ∈ H(L), I = diag{1, 1}, and Q(t) is a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The subsequent application of this function to the derivation of the general solution of the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution of the associated Jacobi problem of inversion of abelian integrals or, equivalently, by determining the zeros of an associated degree-ρ polynomial and solving a certain linear algebraic system of ρ equations.
NASA Astrophysics Data System (ADS)
Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2015-01-01
The problem of state feedback control design for discrete-time Takagi-Sugeno (T-S) fuzzy systems is investigated in this paper. A Lyapunov function is proposed which is quadratic in the state and presents a multi-polynomial dependence on the fuzzy weighting functions at the current and past instants of time. This function contains, as particular cases, other Lyapunov functions already used in the literature, and is able to provide less conservative control design conditions for T-S fuzzy systems. The structure of the proposed Lyapunov function also motivates the design of a new stabilising compensator for T-S fuzzy systems. The main novelty of the proposed state feedback control law is that the gain is composed of matrices with multi-polynomial dependence on the fuzzy weighting functions at a set of past instants of time, including the current one. The conditions for the existence of a stabilising state feedback control law that minimises an upper bound to the ? or ? norms are given in terms of linear matrix inequalities. Numerical examples show that the approach can be less conservative and more efficient than other methods available in the literature.
Integration of CAI into a Freshmen Liberal Arts Math Course in the Community College.
ERIC Educational Resources Information Center
McCall, Michael B.; Holton, Jean L.
1982-01-01
Discusses four computer-assisted-instruction programs used in a college-level mathematics course to introduce computer literacy and improve mathematical skills. The BASIC programs include polynomial functions, trigonometric functions, matrix algebra, and differential calculus. Each program discusses mathematics theory and introduces programming…
Macintosh II based space Telemetry and Command (MacTac) system
NASA Technical Reports Server (NTRS)
Dominy, Carol T.; Chesney, James R.; Collins, Aaron S.; Kay, W. K.
1991-01-01
The general architecture and the principal functions of the Macintosh II based Telemetry and Command system, presently under development, are described, with attention given to custom telemetry cards, input/output interfaces, and the icon-driven user interface. The MacTac is a low-cost, transportable, easy-to-use, compact system designed to meet the requirements specified by the Consultative Committee for Space Data Systems while remaining flexible enough to support a wide variety of other user-specific telemetry processing requirements, such as TDM data. In addition, the MacTac can accept or generate forward data (such as spacecraft commands), calculate and append a Polynomial Check Code, and output these data to NASCOM to provide full Telemetry and Command capability.
Dillon, Paul; Phillips, L Alison; Gallagher, Paul; Smith, Susan M; Stewart, Derek; Cousins, Gráinne
2018-02-05
The Necessity-Concerns Framework (NCF) is a multidimensional theory describing the relationship between patients' positive and negative evaluations of their medication, which interplay to influence adherence. Most studies evaluating the NCF have failed to account for the multidimensional nature of the theory, collapsing the separate dimensions of medication "necessity beliefs" and "concerns" onto a single dimension (e.g., the Beliefs about Medicines Questionnaire difference-score model). To assess the multidimensional effect of patient medication beliefs (concerns and necessity beliefs) on medication adherence, we used polynomial regression with response surface analysis. Community-dwelling older adults >65 years (n = 1,211) presenting their own prescription for antihypertensive medication to 106 community pharmacies in the Republic of Ireland rated their concerns and necessity beliefs about antihypertensive medications at baseline and their adherence to antihypertensive medication at 12 months via structured telephone interview. Confirmatory polynomial regression found the difference-score model to be inaccurate; subsequent exploratory analysis identified a quadratic model to be the best-fitting polynomial model. Adherence was lowest among those with strong medication concerns and weak necessity beliefs, and adherence was greatest for those with weak concerns and strong necessity beliefs (slope β = -0.77, p<.001; curvature β = -0.26, p = .004). However, novel nonreciprocal effects were also observed; patients with simultaneously high concerns and necessity beliefs had lower adherence than those with simultaneously low concerns and necessity beliefs (slope β = -0.36, p = .004; curvature β = -0.25, p = .003). The difference-score model fails to account for these potential nonreciprocal effects. The results extend evidence supporting the use of polynomial regression to assess the multidimensional effect of medication beliefs on adherence.
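The response-surface approach described above can be sketched with a full quadratic regression; the data below are synthetic, and the coefficients are illustrative, not the study's estimates:

```python
import numpy as np

# Regress adherence on centred necessity (N) and concerns (C) scores with a
# full quadratic polynomial, then read off the surface along the line of
# incongruence (N = -C), as in response surface analysis.
rng = np.random.default_rng(0)
N = rng.uniform(-2, 2, 300)                   # centred necessity beliefs
C = rng.uniform(-2, 2, 300)                   # centred concerns
adherence = 3.0 + 0.4 * N - 0.4 * C - 0.1 * N * C + rng.normal(0, 0.1, 300)

X = np.column_stack([np.ones_like(N), N, C, N**2, N * C, C**2])
b0, b1, b2, b3, b4, b5 = np.linalg.lstsq(X, adherence, rcond=None)[0]

slope_incongruence = b1 - b2           # surface slope along N = -C
curvature_incongruence = b3 - b4 + b5  # surface curvature along N = -C
print(slope_incongruence, curvature_incongruence)
```

The difference-score model is the special case in which only N - C enters, which these surface parameters can contradict.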
Correlation between external and internal respiratory motion: a validation study.
Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim
2012-05-01
In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm, demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29%, and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
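The baseline polynomial correlation model can be sketched as follows. The signals are synthetic, and the paper's SVR variant would replace the polynomial fit with ε-SVR:

```python
import numpy as np

# Relate an external surrogate signal to sparse internal measurements, then
# predict internal motion from the continuously measured surrogate.
t_int = np.linspace(0, 10, 12)                     # infrequent internal (X-ray) samples
surrogate_int = np.sin(t_int)                      # external surrogate at those instants
internal = 5.0 * np.sin(t_int) + 2.0               # internal target motion (mm)

# Fit internal position as a quadratic polynomial of the surrogate amplitude.
model = np.polyfit(surrogate_int, internal, deg=2)

t_dense = np.linspace(0, 10, 500)                  # surrogate measured continuously
predicted = np.polyval(model, np.sin(t_dense))
truth = 5.0 * np.sin(t_dense) + 2.0
rms = np.sqrt(np.mean((predicted - truth) ** 2))
print(rms)   # near zero here, because the toy relation is exactly linear
```

Real chest/target relations are nonlinear and hysteretic, which is where SVR gains its reported RMS advantage.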
Equivalences of the multi-indexed orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odake, Satoru
2014-01-15
Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.
Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for the stress components. Elimination of the dependent coefficients then leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. Stress tensor components derived in this way identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the overall performance of the Integrated Force Method is better.
Simulation of aspheric tolerance with polynomial fitting
NASA Astrophysics Data System (ADS)
Li, Jing; Cen, Zhaofeng; Li, Xiaotong
2018-01-01
The shape of an aspheric lens changes because of machining errors, altering the optical transfer function and hence the image quality. At present there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained from polynomial fitting are allocated to the aspheric surface, and imaging is simulated with optical design software. The analysis is based on a sample aspheric imaging system. An error is generated within the range of a given PV value and expressed as a Zernike polynomial, which is added to the aspheric surface as a tolerance term. The MTF of the optical system, obtained from the optical software, is used as the main evaluation index. We evaluate whether the effect of the added error on the system MTF meets the requirements at the current PV value, then change the PV value and repeat the procedure until the maximum acceptable PV value is obtained. Following actual machining practice, errors of various shapes, such as M-type, W-type, and random errors, are considered. The new method provides a reference for practical freeform surface fabrication.
NASA Technical Reports Server (NTRS)
Merz, A. W.; Hague, D. S.
1975-01-01
An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions on the upper surface of the NACA 64-206 and 64(1)-212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which vanishes at both the leading edge and the trailing edge. The function behaves as a polynomial of order ε1 at the leading edge and as a polynomial of order ε2 at the trailing edge. ε2 is held constant, and ε1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying ε1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle of attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two-dimensional flow. The relaxation method of Jameson was employed for the solution of the potential flow equations.
Process-driven inference of biological network structure: feasibility, minimality, and multiplicity
NASA Astrophysics Data System (ADS)
Zeng, Chen
2012-02-01
For a given dynamic process, identifying the putative interaction networks to achieve it is the inference problem. In this talk, we address the computational complexity of inference problem in the context of Boolean networks under dominant inhibition condition. The first is a proof that the feasibility problem (is there a network that explains the dynamics?) can be solved in polynomial-time. Second, while the minimality problem (what is the smallest network that explains the dynamics?) is shown to be NP-hard, a simple polynomial-time heuristic is shown to produce near-minimal solutions, as demonstrated by simulation. Third, the theoretical framework also leads to a fast polynomial-time heuristic to estimate the number of network solutions with reasonable accuracy. We will apply these approaches to two simplified Boolean network models for the cell cycle process of budding yeast (Li 2004) and fission yeast (Davidich 2008). Our results demonstrate that each of these networks contains a giant backbone motif spanning all the network nodes that provides the desired main functionality, while the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. Moreover, we show that the bioprocesses of these two cell cycle models differ considerably from a typically generated process and are intrinsically cascade-like.
Calculation of Thermal Conductivity Coefficients of Electrons in Magnetized Dense Matter
NASA Astrophysics Data System (ADS)
Bisnovatyi-Kogan, G. S.; Glushikhina, M. V.
2018-04-01
The solution of the Boltzmann equation for a plasma in a magnetic field with arbitrarily degenerate electrons and nondegenerate nuclei is obtained by the Chapman-Enskog method. Functions generalizing Sonine polynomials are used to obtain an approximate solution. Fully ionized plasma is considered. The tensor of the heat conductivity coefficients in a nonquantized magnetic field is calculated. For nondegenerate and strongly degenerate plasmas, asymptotic analytic formulas are obtained and compared with the results of previous authors. The Lorentz approximation, which neglects electron-electron encounters, is asymptotically exact for strongly degenerate plasma. For the first time, analytical expressions for the heat conductivity tensor for nondegenerate electrons in the presence of a magnetic field are obtained in the three-polynomial approximation with account of electron-electron collisions. Inclusion of the third polynomial substantially improved the precision of the results. In the two-polynomial approximation, the obtained solution coincides with the published results. For strongly degenerate electrons, an asymptotically exact analytical solution for the heat conductivity tensor in the presence of a magnetic field is obtained for the first time. This solution has a considerably more complicated dependence on the magnetic field than those in previous publications and gives a several times smaller relative value of the thermal conductivity across the magnetic field at ωτ ≳ 0.8.
Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G
2011-10-31
We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
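The fixed mean-trend step above (a fourth-order orthogonal Legendre polynomial on days in milk) can be sketched as follows; the weekly yields are synthetic, not the Holstein data:

```python
import numpy as np
from numpy.polynomial import legendre

# Model the mean lactation trend with a 4th-order Legendre polynomial on
# days in milk rescaled to [-1, 1], as is standard in random regression models.
weeks = np.arange(1, 45, dtype=float)              # weekly classes, 1..44
x = 2.0 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1.0

# Hypothetical mean yields (kg): a rise to an early peak, then slow decline.
yields = 20.0 + 8.0 * np.exp(-((weeks - 8.0) ** 2) / 60.0) - 0.15 * weeks

coef = legendre.legfit(x, yields, deg=4)           # 4th-order Legendre trend
fitted = legendre.legval(x, coef)
rmse = np.sqrt(np.mean((fitted - yields) ** 2))
print(round(rmse, 3))                              # residual of the smooth trend
```

In the full model, additive genetic and permanent environmental effects would be regressed on similar basis functions (Legendre, parametric, or B-spline) per animal.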
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The linear systems resulting from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
NASA Astrophysics Data System (ADS)
Castagnède, Bernard; Jenkins, James T.; Sachse, Wolfgang; Baste, Stéphane
1990-03-01
A method is described to optimally determine the elastic constants of anisotropic solids from wave-speed measurements in arbitrary nonprincipal planes. For such a problem, the characteristic equation is a degree-three polynomial which generally does not factorize. By developing and rearranging this polynomial, a nonlinear system of equations is obtained. The elastic constants are then recovered by minimizing a functional derived from this overdetermined system of equations. Calculations of the functional are given for two specific cases, i.e., the orthorhombic and the hexagonal symmetries. Some numerical results showing the efficiency of the algorithm are presented. A numerical method is also described for the recovery of the orientation of the principal acoustical axes. This problem is solved through a double-iterative numerical scheme. Numerical as well as experimental results are presented for a unidirectional composite material.
NASA Astrophysics Data System (ADS)
Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan
2013-06-01
We previously showed that a combination of image thresholding, chain coding, elliptic Fourier descriptors, and artificial neural network analysis provided a low false acceptance rate (FAR) and false rejection rate (FRR) of 11.0% and 19.0%, respectively, in identifying Thai jasmine rice among three unwanted rice varieties. In this work, we highlight that a polynomial function fitted to the determined chain code, together with neural network analysis, is sufficient to obtain a very low FAR of <3.0% and a very low FRR of 0.3% for the separation of Thai jasmine rice from the Chainat 1 (CNT1), Prathumtani 1 (PTT1), and Hom-Pitsanulok (HPSL) rice varieties. With the proposed approach, the analysis time is reduced from 4,250 seconds to 2 seconds, implying extremely high potential for practical deployment.
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
A point-value enhanced finite volume method based on approximate delta functions
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared with other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The main features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
NASA Astrophysics Data System (ADS)
Punjabi, Alkesh; Ali, Halima; Boozer, Allen; Evans, Todd
2007-11-01
The EFIT data for the DIII-D shot 115467 at 3000 ms are used to calculate the generating function for an area-preserving map for trajectories of magnetic field lines in the DIII-D. We call this map the DIII-D map. The generating function is a bivariate polynomial in the base vectors ψ^1/2 cos(θ) and ψ^1/2 sin(θ), where ψ is the toroidal flux and θ is the poloidal angle. The generating function is calculated using a canonical transformation from (ψ,θ) to physical coordinates (R,Z) in the DIII-D [1] and nonlinear regression. The equilibrium generating function gives an excellent representation of the equilibrium flux surfaces in the DIII-D. The DIII-D map is then used to calculate the effects of magnetic perturbations in the DIII-D. Preliminary results of the DIII-D map will be presented. This work is supported by US DOE OFES DE-FG02-01ER54624 and DE-FG02-04ER54793. [1] A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007).
Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms
NASA Astrophysics Data System (ADS)
Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie
2006-02-01
This paper presents a bicubic uniform B-spline wavefront fitting technique for obtaining an analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. To design and fabricate an off-axis CGH, an analytical expression for the object wavefront must first be fitted. Zernike polynomials are well suited to fitting the wavefront of centrosymmetric optical systems, but not of axisymmetric ones. Although a high-degree polynomial fit can achieve higher precision at all fitting nodes, its major shortcoming is that any departure from the nodes results in large fitting errors, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the computation time for coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted by bicubic uniform B-splines as well as by high-degree polynomials. The results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is the more competitive method for fitting the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
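The "product of a series of matrices" form mentioned in this record can be sketched concretely. Below, S(u, v) = U M G Mᵀ Vᵀ evaluates one bicubic uniform B-spline patch; the 4×4 grid of control values G is a hypothetical stand-in for a wavefront character mesh.

```python
import numpy as np

# Sketch (assumed control-point data): evaluate one bicubic uniform B-spline
# patch in the matrix product form S(u, v) = U M G M^T V^T.
M = (1.0 / 6.0) * np.array([[-1,  3, -3, 1],
                            [ 3, -6,  3, 0],
                            [-3,  0,  3, 0],
                            [ 1,  4,  1, 0]], dtype=float)  # cubic B-spline basis matrix

G = np.arange(16, dtype=float).reshape(4, 4)  # hypothetical 4x4 wavefront control values

def patch_point(u, v, G=G):
    U = np.array([u**3, u**2, u, 1.0])
    V = np.array([v**3, v**2, v, 1.0])
    return U @ M @ G @ M.T @ V

# The basis functions are nonnegative and sum to 1, so the patch value is a
# convex combination of the control values and stays inside their range.
val = patch_point(0.5, 0.5)
print(val)
```

The C2 continuity claimed in the record comes from adjacent patches sharing three rows/columns of G, not from anything in this single-patch evaluation.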
Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw
2011-04-15
Research Highlights: > Physical examples involving exceptional orthogonal polynomials. > Exceptional polynomials as deformations of classical orthogonal polynomials. > Exceptional polynomials from Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials, which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations, and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2003-01-01
A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane, with 7 ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test, to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and analytical orthogonal blocking were developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
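The orthogonal-function modeling step this record describes can be sketched numerically: orthogonalizing ordinary polynomial regressors (QR decomposition is numerically equivalent to Gram-Schmidt), fitting in the decoupled orthogonal basis, and expanding back to ordinary polynomial coefficients. The regressor variable and true coefficients below are illustrative assumptions, not FASER data.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-10.0, 10.0, 200)              # hypothetical angle-of-attack samples
X = np.column_stack([alpha**k for k in range(4)])  # ordinary regressors 1, a, a^2, a^3

# Orthogonalize the modeling functions (QR = numerically stable Gram-Schmidt):
Q, R = np.linalg.qr(X)

# Synthetic "measured" coefficient data with a small noise term:
y = 0.1 + 0.05 * alpha - 0.002 * alpha**3 + rng.normal(0.0, 1e-3, alpha.size)

g = Q.T @ y                    # decoupled least-squares fit in the orthogonal basis
beta = np.linalg.solve(R, g)   # expand back to ordinary polynomial coefficients
print(np.round(beta, 4))
```

Because the columns of Q are orthonormal, each element of `g` can be estimated and retained or dropped independently, which is what makes orthogonal modeling functions convenient for term selection.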
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that the use of such interpolation for functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, one can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or construct on a uniform mesh an interpolation formula that is exact on the boundary-layer components. In this paper the numerical differentiation formulas for functions with large gradients based on the interpolation formulas on the uniform mesh, which were proposed by A.I. Zadorin, are investigated. Formulas for the first and second derivatives of the function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.
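The degradation this record is addressing can be demonstrated in a few lines. Below, the classical three-node (central) difference is applied to the boundary-layer function u(x) = exp(-x/ε); this is a sketch of the failure mode, not of Zadorin's fitted formulas, and the mesh width and evaluation point are illustrative.

```python
import numpy as np

# Sketch: the relative error of the standard central difference for
# u(x) = exp(-x/eps) grows as eps approaches the mesh width h, which is why
# fitted meshes or boundary-layer-exact formulas are needed.
def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0, h = 0.5, 0.01
rel_errs = []
for eps in (1.0, 0.1, 0.01):
    u = lambda x, e=eps: np.exp(-x / e)
    exact = -np.exp(-x0 / eps) / eps
    rel_errs.append(abs(central_diff(u, x0, h) - exact) / abs(exact))
print(rel_errs)  # grows monotonically as eps approaches h
```

For this function the relative error is sinh(h/ε)/(h/ε) − 1, so it is O(h²/ε²): harmless when ε ≫ h, but O(1) once ε ≈ h.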
Quadratic polynomial interpolation on triangular domain
NASA Astrophysics Data System (ADS)
Li, Ying; Zhang, Congcong; Yu, Qian
2018-04-01
In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the spatial scattered data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with the boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying certain accuracy and continuity requirements, without being overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and so on. The resulting surface is presented in experiments.
Tachyon inflation in the large-N formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel
2015-11-01
We study tachyon inflation within the large-N formalism, which takes a prescription for the small Hubble flow slow-roll parameter ε_1 as a function of the large number of e-folds N. This leads to a classification of models through their behaviour at large N. In addition to the perturbative N class, we introduce the polynomial and exponential classes for the ε_1 parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables up to second order in the Hubble flow slow-roll parameters. This allows us to look at observable differences between tachyon and canonical single field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Alex W.; Rivas, Angel; Huelga, Susana F.
2010-09-15
By using the properties of orthogonal polynomials, we present an exact unitary transformation that maps the Hamiltonian of a quantum system coupled linearly to a continuum of bosonic or fermionic modes to a Hamiltonian that describes a one-dimensional chain with only nearest-neighbor interactions. This analytical transformation predicts a simple set of relations between the parameters of the chain and the recurrence coefficients of the orthogonal polynomials used in the transformation and allows the chain parameters to be computed using numerically stable algorithms that have been developed to compute recurrence coefficients. We then prove some general properties of this chain system for a wide range of spectral functions and give examples drawn from physical systems where exact analytic expressions for the chain properties can be obtained. Crucially, the short-range interactions of the effective chain system permit these open-quantum systems to be efficiently simulated by the density matrix renormalization group methods.
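For a discretized bath, the star-to-chain transformation this record describes reduces numerically to a Lanczos tridiagonalization of the diagonal bath Hamiltonian started from the normalized coupling vector; the Lanczos α/β coefficients are the chain on-site energies and hoppings, i.e. the recurrence coefficients of the orthogonal polynomials of the spectral measure. The bath frequencies and couplings below are hypothetical.

```python
import numpy as np

# Sketch (hypothetical discretized bath): Lanczos tridiagonalization of the
# diagonal bath Hamiltonian with the coupling vector as start vector.
rng = np.random.default_rng(1)
omega = np.sort(rng.uniform(0.0, 1.0, 50))   # bath mode frequencies
g = rng.uniform(0.01, 0.1, 50)               # system-bath couplings

H = np.diag(omega)
v_prev = np.zeros(50)
v = g / np.linalg.norm(g)                    # normalized coupling vector
alphas, betas = [], []                       # chain site energies and hoppings
b = 0.0
for _ in range(10):                          # build a 10-site chain
    w = H @ v - b * v_prev
    a = v @ w
    w = w - a * v
    alphas.append(a)
    b = np.linalg.norm(w)
    if len(alphas) < 10:
        betas.append(b)
        v_prev, v = v, w / b
print(len(alphas), len(betas))
```

In practice the record's point is precisely that one should use numerically stable recurrence-coefficient algorithms rather than naive Lanczos, which loses orthogonality for long chains; this sketch only illustrates the structure of the map.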
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
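The polynomial share construction and Lagrange recovery that these threshold-changeable schemes build on can be sketched with a minimal classical (t, n) Shamir-style scheme over a prime field; the field size and secret below are illustrative, and none of the dealer-free threshold-update machinery of the paper is shown.

```python
import random

# Minimal (t, n) Shamir-style sketch over a prime field.
P = 2_147_483_647  # the Mersenne prime 2^31 - 1

def make_shares(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0; pow(d, P-2, P) is the inverse mod P.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, since any candidate constant term is consistent with some degree-(t−1) polynomial through them.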
Santellano-Estrada, E; Becerril-Pérez, C M; de Alba, J; Chang, Y M; Gianola, D; Torres-Hernández, G; Ramírez-Valverde, R
2008-11-01
This study inferred genetic and permanent environmental variation of milk yield in Tropical Milking Criollo cattle and compared 5 random regression test-day models using Wilmink's function and Legendre polynomials. Data consisted of 15,377 test-day records from 467 Tropical Milking Criollo cows that calved between 1974 and 2006 in the tropical lowlands of the Gulf Coast of Mexico and in southern Nicaragua. Estimated heritabilities of test-day milk yields ranged from 0.18 to 0.45, and repeatabilities ranged from 0.35 to 0.68 for the period spanning from 6 to 400 d in milk. Genetic correlation between days in milk 10 and 400 was around 0.50 but greater than 0.90 for most pairs of test days. The model that used first-order Legendre polynomials for additive genetic effects and second-order Legendre polynomials for permanent environmental effects gave the smallest residual variance and was also favored by the Akaike information criterion and likelihood ratio tests.
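Building the Legendre covariates used in such random regression test-day models is straightforward: days in milk are rescaled to [−1, 1], where the Legendre polynomials are orthogonal, and each animal's curve is a linear combination of the low-order polynomials. The test days below are illustrative, not the study's data.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: second-order Legendre covariates (P0, P1, P2) for days in milk.
dim = np.array([6.0, 50.0, 150.0, 250.0, 400.0])          # example test days
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0  # rescale to [-1, 1]

order = 2
# Column k evaluates the k-th Legendre polynomial at every test day.
Phi = np.column_stack([legendre.legval(x, np.eye(order + 1)[k])
                       for k in range(order + 1)])
print(Phi.shape)   # one row per test day, one column per polynomial
```

A model with first-order polynomials for additive genetic effects and second-order for permanent environment, as favored in this study, simply uses the first two and first three columns of such a matrix, respectively.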
From Cycle Rooted Spanning Forests to the Critical Ising Model: an Explicit Construction
NASA Astrophysics Data System (ADS)
de Tilière, Béatrice
2013-04-01
Fisher established an explicit correspondence between the 2-dimensional Ising model defined on a graph G and the dimer model defined on a decorated version 𝒢 of this graph (Fisher in J Math Phys 7:1776-1781, 1966). In this paper we explicitly relate the dimer model associated to the critical Ising model and critical cycle rooted spanning forests (CRSFs). This relation is established through characteristic polynomials, whose definition only depends on the respective fundamental domains, and which encode the combinatorics of the model. We first show a matrix-tree type theorem establishing that the dimer characteristic polynomial counts CRSFs of the decorated fundamental domain 𝒢₁. Our main result consists in explicitly constructing CRSFs of 𝒢₁ counted by the dimer characteristic polynomial, from CRSFs of G₁, where edges are assigned Kenyon's critical weight function (Kenyon in Invent Math 150(2):409-439, 2002); thus proving a relation on the level of configurations between two well-known 2-dimensional critical models.
NASA Astrophysics Data System (ADS)
Delfani, M. R.; Latifi Shahandashti, M.
2017-09-01
In this paper, within the complete form of Mindlin's second strain gradient theory, the elastic field of an isolated spherical inclusion embedded in an infinitely extended homogeneous isotropic medium due to a non-uniform distribution of eigenfields is determined. These eigenfields, in addition to eigenstrain, comprise eigen double and eigen triple strains. After the derivation of a closed-form expression for Green's function associated with the problem, two different cases of non-uniform distribution of the eigenfields are considered as follows: (i) radial distribution, i.e. the distributions of the eigenfields are functions of only the radial distance of points from the centre of inclusion, and (ii) polynomial distribution, i.e. the distributions of the eigenfields are polynomial functions in the Cartesian coordinates of points. While the obtained solution for the elastic field of the latter case takes the form of an infinite series, the solution to the former case is represented in a closed form. Moreover, Eshelby's tensors associated with the two mentioned cases are obtained.
Simple Proof of Jury Test for Complex Polynomials
NASA Astrophysics Data System (ADS)
Choo, Younseok; Kim, Dongmin
Recently some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided, based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
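The Schur stability property being tested can be sketched via the Schur-Cohn reduction, which the Jury table organizes: p (coefficients a₀..aₙ, ascending) has all roots strictly inside the unit circle iff |a₀| < |aₙ| and the degree-reduced polynomial (conj(aₙ)·p(z) − a₀·p*(z))/z passes the same test, where p* reverses and conjugates the coefficients. This is a sketch of the criterion, not of the letter's proof.

```python
import numpy as np

# Sketch: Schur-Cohn recursion for Schur stability of a complex polynomial.
def is_schur(a, tol=1e-12):
    a = np.asarray(a, dtype=complex)    # coefficients a_0 .. a_n, ascending
    while len(a) > 1:
        if abs(a[0]) >= abs(a[-1]) - tol:
            return False
        star = np.conj(a[::-1])                       # reversed, conjugated
        a = (np.conj(a[-1]) * a - a[0] * star)[1:]    # constant term is 0; drop it
    return True

# (z - 0.5)(z + 0.3j) = z^2 + (0.3j - 0.5) z - 0.15j: both roots inside
print(is_schur([-0.15j, 0.3j - 0.5, 1.0]))   # True
print(is_schur([2.0, 0.0, 1.0]))             # z^2 + 2: roots outside -> False
```

Each reduction step strictly lowers the degree, so the test terminates after at most n steps.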
USDA-ARS?s Scientific Manuscript database
Objective: To examine the risk factors of developing functional decline and make probabilistic predictions by using a tree-based method that allows higher order polynomials and interactions of the risk factors. Methods: The conditional inference tree analysis, a data mining approach, was used to con...
Translational Bounds for Factorial n and the Factorial Polynomial
ERIC Educational Resources Information Center
Mahmood, Munir; Edwards, Phillip
2009-01-01
During the period 1729-1826 Bernoulli, Euler, Goldbach and Legendre developed expressions for defining and evaluating "n"! and the related gamma function. Expressions related to "n"! and the gamma function are a common feature in computer science and engineering applications. In the modern computer age people live in now, two common tests to…
On the Matrix Exponential Function
ERIC Educational Resources Information Center
Hou, Shui-Hung; Hou, Edwin; Pang, Wan-Kai
2006-01-01
A novel and simple formula for computing the matrix exponential function is presented. Specifically, it can be used to derive explicit formulas for the matrix exponential of a general matrix A satisfying p(A) = 0 for a polynomial p(s). It is ready for use in a classroom and suitable for both hand as well as symbolic computation.
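The idea behind such formulas can be illustrated for a hypothetical 2×2 case: if p(A) = 0 for a degree-2 polynomial p with distinct roots λ₁, λ₂ (here the characteristic polynomial), then exp(A) = αI + βA, where α and β solve the interpolation conditions exp(λᵢ) = α + βλᵢ. The matrix below is an assumption for illustration, not an example from the article.

```python
import numpy as np

# Sketch: matrix exponential from p(A) = 0 with distinct roots (2x2 case).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
l1, l2 = np.linalg.eigvals(A)

beta = (np.exp(l1) - np.exp(l2)) / (l1 - l2)   # slope of the interpolant
alpha = np.exp(l1) - beta * l1                 # intercept
expA = alpha * np.eye(2) + beta * A            # exp(A) as a polynomial in A
print(np.round(expA, 6))
```

For repeated roots the interpolation conditions become Hermite conditions (matching derivatives), which is how the general formula handles defective matrices.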
Semiparametric Item Response Functions in the Context of Guessing
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2016-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Calculus of Elementary Functions, Part I. Student Text. Revised Edition.
ERIC Educational Resources Information Center
Herriot, Sarah T.; And Others
This course is intended for students who have a thorough knowledge of college preparatory mathematics, including algebra, axiomatic geometry, trigonometry, and analytic geometry. This text, Part I, contains the first five chapters of the course and two appendices. Chapters included are: (1) Polynomial Functions; (2) The Derivative of a Polynomial…
Dynamic Bidirectional Reflectance Distribution Functions: Measurement and Representation
2008-02-01
be included in the harmonic fits. Other sets of orthogonal functions such as Zernike polynomials have also been used to characterize BRDF and could...reflectance spectra of 3D objects,” Proc. SPIE 4663, 370–378 2001. 13J. R. Shell II, C. Salvagio, and J. R. Schott, “A novel BRDF measurement technique
Quantum mechanics without potential function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhaidari, A. D., E-mail: haidari@sctp.org.sa; Ismail, M. E. H.
2015-07-15
In the standard formulation of quantum mechanics, one starts by proposing a potential function that models the physical system. The potential is then inserted into the Schrödinger equation, which is solved for the wavefunction, bound states energy spectrum, and/or scattering phase shift. In this work, however, we propose an alternative formulation in which the potential function does not appear. The aim is to obtain a set of analytically realizable systems, which is larger than in the standard formulation and may or may not be associated with any given or previously known potential functions. We start with the wavefunction, which is written as a bounded infinite sum of elements of a complete basis with polynomial coefficients that are orthogonal on an appropriate domain in the energy space. Using the asymptotic properties of these polynomials, we obtain the scattering phase shift, bound states, and resonances. This formulation enables one to handle not only the well-known quantum systems but also previously untreated ones. Illustrative examples are given for two- and three-parameter systems.
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add additional information useful to the diagnosis carried out with MRI. The contribution to the MRI diagnosis of our study are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
Direct localization of poles of a meromorphic function from measurements on an incomplete boundary
NASA Astrophysics Data System (ADS)
Nara, Takaaki; Ando, Shigeru
2010-01-01
This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
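The algebraic core of such pole-localization methods can be sketched numerically: given the power sums pₖ = Σᵢ zᵢᵏ of the pole positions (in the paper these come from weighted boundary integrals), Newton's identities yield the elementary symmetric polynomials, and the poles are recovered as roots of the monic polynomial they define. The pole positions below are an assumed test case.

```python
import numpy as np

# Sketch: recover pole positions from their power sums via Newton's identities.
poles = np.array([1.0 + 2.0j, -0.5 + 0.3j, 2.0 - 1.0j])   # assumed true poles
n = len(poles)
p = [np.sum(poles**k) for k in range(1, n + 1)]           # power sums p_1..p_n

e = [1.0]  # e_0 = 1; Newton: k*e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i
for k in range(1, n + 1):
    s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
    e.append(s / k)

# z^n - e_1 z^(n-1) + e_2 z^(n-2) - ... has exactly the poles as its roots.
coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
recovered = np.sort_complex(np.roots(coeffs))
print(recovered)
```

The paper's contribution lies in obtaining these symmetric-polynomial data from measurements on an open or closed arc; this sketch covers only the final inversion step.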
A new order-theoretic characterisation of the polytime computable functions☆
Avanzini, Martin; Eguchi, Naohi; Moser, Georg
2015-01-01
We propose a new order-theoretic characterisation of the class of polytime computable functions. To this end we define the small polynomial path order (sPOP⁎ for short). This termination order entails a new syntactic method to analyse the innermost runtime complexity of term rewrite systems fully automatically: for any rewrite system compatible with sPOP⁎ that employs recursion up to depth d, the (innermost) runtime complexity is polynomially bounded of degree d. This bound is tight. Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition of a program and the asymptotic worst-case complexity of the program. PMID:26412933
A polynomial primal-dual Dikin-type algorithm for linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jansen, B.; Roos, R.; Terlaky, T.
1994-12-31
We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvitis, Leonid
2009-01-01
An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-HARD. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-Stable polynomials.
Zhukovsky, K
2014-01-01
We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial derivative equations, related to heat, wave, and transport problems, are demonstrated.
Alvermann, A; Fehske, H
2009-04-17
We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ɛ)/2) lower bound, for any ɛ > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
Nodal Statistics for the Van Vleck Polynomials
NASA Astrophysics Data System (ADS)
Bourget, Alain
The Van Vleck polynomials naturally arise from the generalized Lamé equation
Legendre modified moments for Euler's constant
NASA Astrophysics Data System (ADS)
Prévost, Marc
2008-10-01
Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials, see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials-Theory and Practice, NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 294, Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4]].
NASA Astrophysics Data System (ADS)
Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus
2018-03-01
The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends. The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratios of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading). For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme.
Simulations with SWIFT in ATLAS have proven that the systematic error is small and does not accumulate during the course of a simulation. In the context of a 10-year simulation, the ozone layer simulated by SWIFT shows a stable annual cycle, with inter-annual variations comparable to the ATLAS model. The application of Extrapolar SWIFT requires the evaluation of polynomial functions with 30-100 terms. Computers can currently calculate such polynomial functions at thousands of model grid points in seconds. SWIFT provides the desired numerical efficiency and computes the ozone layer 10^4 times faster than the chemistry scheme in the ATLAS CTM.
Fast decoder for local quantum codes using Groebner basis
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2013-03-01
Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n^2 log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.
Rdzanek, Wojciech P
2016-06-01
This study deals with the classical problem of sound radiation of an excited clamped circular plate embedded in a flat rigid baffle. The system of two coupled differential equations is solved: one for the excited and damped vibrations of the plate, the other being the Helmholtz equation. An approach using the expansion into radial polynomials leads to results for the modal impedance coefficients useful for a comprehensive numerical analysis of sound radiation. The results obtained are accurate and efficient in a wide low-frequency range and can easily be adapted to a simply supported circular plate. The fluid loading is included, providing accurate results at resonance.
Rational integrability of trigonometric polynomial potentials on the flat torus
NASA Astrophysics Data System (ADS)
Combot, Thierry
2017-07-01
We consider a lattice ℒ ⊂ ℝ^n and a trigonometric potential V with frequencies k ∈ ℒ. We then prove a strong rational integrability condition on V, using the support of its Fourier transform. We then use this condition to prove that a real trigonometric polynomial potential is rationally integrable if and only if it separates up to rotation of the coordinates. Removing the reality condition, we also classify rationally integrable potentials in dimensions 2 and 3 and recover several integrable cases. After a complex change of variables, these potentials become real and correspond to generalized Toda integrable potentials. Moreover, along the proof, some of them with high-degree first integrals are explicitly integrated.
Solution of the two-dimensional spectral factorization problem
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1985-01-01
An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.
On multiple orthogonal polynomials for discrete Meixner measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokin, Vladimir N
2010-12-07
The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.
Decomposition Theory in the Teaching of Elementary Linear Algebra.
ERIC Educational Resources Information Center
London, R. R.; Rogosinski, H. P.
1990-01-01
Described is a decomposition theory from which the Cayley-Hamilton theorem, the diagonalizability of complex square matrices, and functional calculus can be developed. The theory and its applications are based on elementary polynomial algebra. (KR)
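The Cayley-Hamilton theorem mentioned above is easy to verify numerically; a minimal sketch with a hypothetical 2×2 matrix (illustration only, not the decomposition theory of the paper):

```python
import numpy as np

# Numerical check of the Cayley-Hamilton theorem: a square matrix
# satisfies its own characteristic polynomial, chi(A) = 0.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

c = np.poly(A)  # characteristic polynomial coefficients, highest degree first

# Evaluate chi(A) = c[0]*A^2 + c[1]*A + c[2]*I by Horner's rule.
chi_A = np.zeros_like(A)
for coeff in c:
    chi_A = chi_A @ A + coeff * np.eye(2)
# chi_A is (numerically) the zero matrix
```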
Analog Computation by DNA Strand Displacement Circuits.
Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John
2016-08-19
DNA circuits have been widely used to develop biological computing devices because of their high programmability and versatility. Here, we propose an architecture for the systematic construction of DNA circuits for analog computation based on DNA strand displacement. The elementary gates in our architecture include addition, subtraction, and multiplication gates. The input and output of these gates are analog, which means that they are directly represented by the concentrations of the input and output DNA strands, respectively, without requiring a threshold for converting to Boolean signals. We provide detailed domain designs and kinetic simulations of the gates to demonstrate their expected performance. On the basis of these gates, we describe how DNA circuits to compute polynomial functions of inputs can be built. Using Taylor Series and Newton Iteration methods, functions beyond the scope of polynomials can also be computed by DNA circuits built upon our architecture.
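The Newton-iteration idea mentioned at the end can be illustrated in software: the sketch below computes the non-polynomial function 1/x using only addition, subtraction and multiplication, the operations the elementary gates provide (an illustration of the principle, not the paper's DNA design):

```python
# Newton's iteration for the reciprocal, y_{k+1} = y_k * (2 - x * y_k),
# needs no division -- only the multiply/subtract primitives that the
# analog gates implement. It converges quadratically to 1/x for a
# start value 0 < y0 < 2/x.
def reciprocal(x, y0=0.1, steps=10):
    y = y0
    for _ in range(steps):
        y = y * (2.0 - x * y)
    return y

approx = reciprocal(3.0)  # converges to 1/3
```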
Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells
NASA Astrophysics Data System (ADS)
Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.
2017-03-01
Two approaches to studying the free and forced axisymmetric vibrations of a cylindrical shell are proposed. They are based on the three-dimensional theory of elasticity and division of the original cylindrical shell with concentric cross-sectional circles into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide-and-conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
Partial regularity of weak solutions to a PDE system with cubic nonlinearity
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Xu, Xiangsheng
2018-04-01
In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.
Differential Galois theory and non-integrability of planar polynomial vector fields
NASA Astrophysics Data System (ADS)
Acosta-Humánez, Primitivo B.; Lázaro, J. Tomás; Morales-Ruiz, Juan J.; Pantazi, Chara
2018-06-01
We study a necessary condition for the integrability of polynomial vector fields in the plane by means of differential Galois theory. More concretely, by means of the variational equations around a particular solution, a necessary condition for the existence of a rational first integral is obtained. The method is systematic, starting with the first-order variational equation. We illustrate this result with several families of examples. A key point is to check whether a suitable primitive is elementary or not. Using a theorem by Liouville, the problem is equivalent to the existence of a rational solution of a certain first-order linear equation, the Risch equation. This is a classical problem studied by Risch in 1969, and the solution is given by the "Risch algorithm". In this way we point out the connection of non-integrability with some higher transcendental functions, like the error function.
Tolerance analysis of optical telescopes using coherent addition of wavefront errors
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
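The Zernike expansion used here rests on the standard radial polynomials R_n^m; a minimal sketch of the textbook formula (not the Ramsey-Korsch code itself):

```python
from math import factorial

# Standard Zernike radial polynomial:
#   R_n^m(rho) = sum_k (-1)^k (n-k)! /
#                (k! ((n+m)/2 - k)! ((n-m)/2 - k)!) * rho^(n-2k)
def zernike_radial(n, m, rho):
    m = abs(m)
    if (n - m) % 2:  # R_n^m vanishes when n - m is odd
        return 0.0
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

defocus = zernike_radial(2, 0, 0.5)  # R_2^0(rho) = 2*rho^2 - 1, so -0.5 here
```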
Direct calculation of modal parameters from matrix orthogonal polynomials
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Guillaume, Patrick
2011-10-01
The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High-order models can be used without any numerical problems. The proposed method will be compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data will be used.
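The numerical point, extracting roots or poles directly from orthogonal-basis coefficients rather than via an ill-conditioned conversion to the power basis, can be illustrated in the scalar case with NumPy's Chebyshev tooling (an analogue only; the paper treats matrix orthogonal polynomials):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A polynomial given by its Chebyshev-basis coefficients:
# f = 1*T_0 - 0.5*T_2 = 1.5 - x^2  (T_k = Chebyshev polynomial, first kind)
coef = [1.0, 0.0, -0.5]

# Roots computed directly in the orthogonal basis via the "colleague"
# matrix, with no change of basis:
roots_direct = np.sort(C.chebroots(coef))

# Same roots via explicit conversion to the power basis -- the step
# that becomes ill-conditioned at high order:
roots_power = np.sort(np.roots(C.cheb2poly(coef)[::-1]))
```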
Application of overlay modeling and control with Zernike polynomials in an HVM environment
NASA Astrophysics Data System (ADS)
Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill
2016-03-01
Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via exponentially or linearly weighted moving average in time, are then retrieved from the APC system to apply on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal in the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method
NASA Technical Reports Server (NTRS)
Smith, James P.
1996-01-01
A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.
Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2015-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
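For contrast with the spatial models analyzed in the paper, the well-mixed case does admit a simple closed form for the invasion probability; a sketch of the standard Moran-process formula (a textbook baseline, not the paper's spatial results):

```python
# Invasion probability in a well-mixed Moran process: a single mutant
# of relative fitness r in a resident population of size N fixes with
# probability rho = (1 - 1/r) / (1 - 1/r^N). The spatial scenarios in
# the paper are precisely the cases where no such simple equation is
# available (assuming P != NP).
def fixation_probability(r, N):
    if r == 1.0:            # neutral mutant: rho = 1/N
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

rho = fixation_probability(2.0, 10)  # advantageous mutant, small population
```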
Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network
NASA Astrophysics Data System (ADS)
MolaAbasi, H.; Shooshpasha, I.
2016-04-01
The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies are being developed for improved soils based on a rational criterion as exists in concrete technology. There are numerous earlier studies showing the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) via power function fits. Taking into account the fact that the existing equations are incapable of estimating UCS well for zeolite-cemented sand mixtures (ZCS), artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compressive tests have been done. A comparison is carried out between the experimentally measured UCS and the predictions in order to evaluate the performance of the current method. The results demonstrate that the generalized polynomial-type neural network has a great ability for prediction of the UCS. Finally, a sensitivity analysis of the polynomial model is applied to study the influence of input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have significant influence on predicting UCS.
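A polynomial-type network is, at its core, a polynomial least-squares fit; the toy sketch below fits a degree-2 polynomial to synthetic, noise-free data (illustrative only, not the GMDH-style network or the UCS dataset of the paper):

```python
import numpy as np

# Synthetic strength-like quantity as a polynomial of one index
# property (hypothetical coefficients, for illustration):
rng = np.random.default_rng(0)
porosity = rng.uniform(0.3, 0.6, 50)
ucs = 12.0 - 30.0 * porosity + 25.0 * porosity ** 2

# Degree-2 fit: design matrix [1, x, x^2], solved by linear least squares.
X = np.vander(porosity, 3, increasing=True)
coeffs, *_ = np.linalg.lstsq(X, ucs, rcond=None)
# with noise-free data the fit recovers the generating coefficients
```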
NASA Astrophysics Data System (ADS)
Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi
2012-04-01
Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using a Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. 
Group 3 with 5000 randomly selected points on the landslide polygons, and 5000 randomly selected points along stable slopes gave the best results with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples, and the negative training samples of 3147 randomly selected points in regions of stable slope (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
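The four kernel functions compared in the study have standard closed forms; a sketch with illustrative hyperparameter values (gamma, coef0 and degree are placeholders, not the settings used in the paper):

```python
import numpy as np

# The four SVM kernels named in the study, for feature vectors x and y:
def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, gamma=1.0, coef0=1.0, degree=3):
    return (gamma * (x @ y) + coef0) ** degree

def rbf_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, gamma=1.0, coef0=0.0):
    return np.tanh(gamma * (x @ y) + coef0)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
k_rbf = rbf_kernel(x, y)  # exp(-2) for these orthogonal unit vectors
```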
Application of field dependent polynomial model
NASA Astrophysics Data System (ADS)
Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan
2016-09-01
Extremely wide-field imaging systems have many advantages regarding large display scenes whether for use in microscopy, all sky cameras, or in security technologies. The Large viewing angle is paid by the amount of aberrations, which are included with these imaging systems. Modeling wavefront aberrations using the Zernike polynomials is known a longer time and is widely used. Our method does not model system aberrations in a way of modeling wavefront, but directly modeling of aberration Point Spread Function of used imaging system. This is a very complicated task, and with conventional methods, it was difficult to achieve the desired accuracy. Our optimization techniques of searching coefficients space-variant Zernike polynomials can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that the model describes the whole space-variant system, unlike the majority models which are partly invariant systems. The issue that this model is the attempt to equalize the size of the modeled Point Spread Function, which is comparable to the pixel size. Issues associated with sampling, pixel size, pixel sensitivity profile must be taken into account in the design. The model was verified in a series of laboratory test patterns, test images of laboratory light sources and consequently on real images obtained by an extremely wide-field imaging system WILLIAM. Results of modeling of this system are listed in this article.
Rational approximations of f(R) cosmography through Padé polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z > 1, we take into account the Padé rational approximations, which consist in performing expansions that converge in high-redshift domains. Particularly, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which point towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
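The Padé idea in miniature: from a few Taylor coefficients one builds a rational approximant that remains usable beyond the Taylor truncation. The sketch below does this for exp(x) rather than the cosmographic series in z (illustrative only):

```python
import numpy as np
from scipy.interpolate import pade

# From the Taylor coefficients of exp(x) around 0, build the [1/1]
# Padé approximant exp(x) ~ (1 + x/2) / (1 - x/2).
taylor = [1.0, 1.0, 0.5]   # 1 + x + x^2/2
p, q = pade(taylor, 1)     # numerator and denominator, degree 1 each

x = 0.1
approx = p(x) / q(x)       # close to exp(0.1)
```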
Hadamard Factorization of Stable Polynomials
NASA Astrophysics Data System (ADS)
Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar
2011-11-01
The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]: p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product (p × q) is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then (p × q) is also stable, i.e. the Hadamard product is closed; however, the converse is not always true: not every stable polynomial has a factorization into two stable polynomials of the same degree n if n > 4 (see [15]). In this work we give some conditions for the existence of a Hadamard factorization of stable polynomials.
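The Hadamard product defined above takes one line of code; the sketch below pairs it with a root-based Hurwitz test on a hypothetical example (illustrative, not from the paper):

```python
import numpy as np

# Hadamard (coefficient-wise) product of two polynomials, coefficients
# listed from the constant term upward: (p x q)_k = a_k * b_k.
def hadamard(a, b):
    k = min(len(a), len(b))
    return [a[i] * b[i] for i in range(k)]

def is_hurwitz(coeffs):
    # Stable (Hurwitz) <=> every root has negative real part.
    roots = np.roots(coeffs[::-1])  # np.roots expects highest degree first
    return bool(np.all(roots.real < 0))

p_coef = [2.0, 3.0, 1.0]      # x^2 + 3x + 2, roots -1 and -2 (stable)
q_coef = [3.0, 4.0, 1.0]      # x^2 + 4x + 3, roots -1 and -3 (stable)
h = hadamard(p_coef, q_coef)  # x^2 + 12x + 6, stable by the closure result
```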
Recursive formulas for determining perturbing accelerations in intermediate satellite motion
NASA Astrophysics Data System (ADS)
Stoianov, L.
Recursive formulas for Legendre polynomials and associated Legendre functions are used to obtain recursive relationships for determining the acceleration components which perturb intermediate satellite motion. The formulas are applicable whenever the perturbing force function is represented as a series in spherical functions (gravitational, tidal, thermal, geomagnetic, and other perturbations of intermediate motion), and they can also be used to determine the order of the perturbing accelerations.
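A minimal sketch of the kind of recursion involved (the satellite-specific acceleration formulas themselves are not reproduced here): Bonnet's three-term recurrence, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x), lets each Legendre polynomial value be computed from the previous two, so no polynomial coefficients need to be stored.

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x  # P_0(x), P_1(x)
    for k in range(1, n):
        # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

# P_2(x) = (3x^2 - 1)/2, so P_2(0.5) = -0.125
print(legendre(2, 0.5))
```

The same pattern extends to associated Legendre functions P_n^m, which is what makes recursive evaluation of a full spherical-function series efficient.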
Stable Numerical Approach for Fractional Delay Differential Equations
NASA Astrophysics Data System (ADS)
Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.
2017-12-01
In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational-matrix approach converts the FDDE into a system of linear equations, and the numerical solution is then obtained by solving this linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the parameters of the Jacobi polynomials. In special cases, the Jacobi polynomials reduce to well-known families: (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and root-mean-square error are calculated for the illustrated examples and presented in tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better.
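The Jacobi family P_n^{(α,β)} that underlies the method can itself be evaluated by a three-term recurrence, and the reduction to Legendre (α = β = 0) is easy to check numerically. A minimal sketch (evaluation only; the operational matrix of integration is not constructed here):

```python
def jacobi(n, alpha, beta, x):
    """Evaluate the Jacobi polynomial P_n^{(alpha,beta)}(x) by the
    standard three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev = 1.0
    p_curr = 0.5 * (alpha + beta + 2) * x + 0.5 * (alpha - beta)  # P_1
    for k in range(2, n + 1):
        c = 2 * k + alpha + beta
        a1 = 2 * k * (k + alpha + beta) * (c - 2)
        a2 = (c - 1) * (c * (c - 2) * x + alpha**2 - beta**2)
        a3 = 2 * (k + alpha - 1) * (k + beta - 1) * c
        p_prev, p_curr = p_curr, (a2 * p_curr - a3 * p_prev) / a1
    return p_curr

# alpha = beta = 0 reduces to Legendre: P_3(0.5) = (5x^3 - 3x)/2 = -0.4375
print(jacobi(3, 0.0, 0.0, 0.5))
```

The Chebyshev polynomials of the second, third and fourth kinds are likewise recovered (up to normalization constants) at (α, β) = (1/2, 1/2), (-1/2, 1/2) and (1/2, -1/2), which is what allows the single Jacobi-based method to cover all four comparison cases in the paper.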
Percolation critical polynomial as a graph invariant
Scullard, Christian R.
2012-10-18
Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.
[Study on application of SVM in prediction of coronary heart disease].
Zhu, Yue; Wu, Jianghua; Fang, Ying
2013-12-01
Based on blood pressure, plasma lipid, Glu and UA data from physical examinations, a support vector machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, SVM classifiers were built using a radial basis kernel function, a linear kernel function and a polynomial kernel function, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with a back-propagation (BP) artificial neural network, linear discriminant analysis, logistic regression and non-optimized SVM, the overall results demonstrated that the classification performance of the optimized RBF-SVM model was superior to the other classifiers, with higher accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. It is therefore concluded that SVM can be used as a valid method for assisting in the diagnosis of CHD.
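A hedged sketch of this workflow (the CHD dataset is not public here, so synthetic data stands in, and an exhaustive grid search over C and gamma stands in for the PSO step):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the clinical features (BP, lipids, Glu, UA, ...).
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF-SVM with cross-validated search over the penalty factor C and the
# kernel width gamma (playing the role sigma plays in the abstract).
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
print(grid.best_params_, acc)
```

PSO and grid search are interchangeable here in the sense that both only choose (C, gamma); the fitted RBF-SVM and its accuracy/sensitivity/specificity evaluation are the same downstream.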
Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru
2014-10-15
Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool for analysing various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established with non-linear models: partial least squares (PLS), a genetic-algorithm back-propagation neural network (GA-BP) and a support vector machine (SVM). The SVM with a radial basis function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models; considerably lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: with a polynomial kernel, the prediction accuracy of the SVM dropped to 32.9%. As a powerful multivariate statistical method, SVM holds great potential for assessing beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
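The large accuracy gap between the RBF and polynomial kernels reported above comes down to how each kernel measures similarity between samples. A minimal numpy sketch of the two Gram matrices (the data and parameter values are illustrative, not from the study):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian RBF Gram matrix: K[i,j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def poly_kernel(X, degree=3, c0=1.0):
    """Polynomial Gram matrix: K[i,j] = (x_i . x_j + c0)^degree."""
    return (X @ X.T + c0) ** degree

X = np.random.default_rng(0).normal(size=(5, 3))
K_rbf, K_poly = rbf_kernel(X), poly_kernel(X)
# K_rbf is bounded with unit diagonal; K_poly grows with the dot products,
# which is one reason kernel choice and scaling matter so much in practice.
```

Swapping one Gram matrix for the other is all that changes in the SVM optimization, yet it completely changes the induced feature space, consistent with the 96.2% vs 32.9% contrast reported.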
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavignet, A.A.; Wick, C.J.
In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms measured over a wide range of shear rates. The flow curves are fitted to polynomials, and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration system, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
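The flow-curve fitting step can be sketched in a few lines (the rheogram data below are invented for illustration; the paper's friction-loss and settling-velocity correlations built on the coefficients are not reproduced):

```python
import numpy as np

# Invented rheogram: shear stress (Pa) measured at several shear rates (1/s).
shear_rate = np.array([5.0, 10.0, 50.0, 100.0, 300.0, 600.0])
shear_stress = np.array([3.1, 4.4, 9.8, 14.2, 27.5, 42.0])

# Fit the flow curve with a cubic polynomial in shear rate.
coeffs = np.polyfit(shear_rate, shear_stress, deg=3)
fitted = np.polyval(coeffs, shear_rate)

# The fitted coefficients then let stress be evaluated at any shear rate,
# which is what downstream friction-loss calculations consume.
stress_200 = np.polyval(coeffs, 200.0)
print(coeffs, stress_200)
```

Fitting the measured curve directly, rather than forcing a Bingham or power-law model, is exactly the flexibility the abstract claims over the "simple rheological models" of prior practice.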