Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.
2005-01-01
For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdős weights) and even weights which vanish strongly near ±1, for example Pollaczek type weights.
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
A Novel Multi-Receiver Signcryption Scheme with Complete Anonymity.
Pang, Liaojun; Yan, Xuxia; Zhao, Huiyang; Hu, Yufei; Li, Huixian
2016-01-01
Anonymity, which is more and more important for multi-receiver schemes, has been taken into consideration by many researchers recently. To protect receiver anonymity, the first multi-receiver scheme based on the Lagrange interpolating polynomial was proposed in 2010. To ensure sender anonymity, the concept of the ring signature was proposed in 2005, but this scheme was later shown to have some weaknesses; at the same time, a completely anonymous multi-receiver signcryption scheme was proposed. In this completely anonymous scheme, sender anonymity is achieved by improving the ring signature, and receiver anonymity is achieved by again using the Lagrange interpolating polynomial. Unfortunately, the Lagrange interpolation method was shown to fail to protect the anonymity of receivers, because each authorized receiver can judge whether anyone else is authorized or not. Therefore, the completely anonymous multi-receiver signcryption mentioned above can only protect sender anonymity. In this paper, we propose a new completely anonymous multi-receiver signcryption scheme with a new polynomial technology used to replace the Lagrange interpolating polynomial, which can mix the identity information of receivers into a ciphertext element and prevent the authorized receivers from identifying others. Along with receiver anonymity, the proposed scheme also achieves sender anonymity at the same time. Meanwhile, decryption fairness and public verification are also provided.
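A toy illustration of the weakness described above may help. The sketch below is hypothetical and is not the cited scheme's exact construction: as a simplified stand-in for the Lagrange-based key masking, the session key k is hidden in a polynomial f over GF(P) with f(h_i) = k for every authorized identity hash h_i. Any receiver who has recovered k can then probe f at other identity hashes, which is exactly the receiver-anonymity failure.

```python
# Hypothetical sketch, not the paper's scheme: the ciphertext carries the
# coefficients of f(x) = (x - h_1)(x - h_2)...(x - h_n) + k over GF(P),
# so f(h_i) = k for every authorized identity hash h_i.
import hashlib

P = 2**127 - 1  # assumed prime modulus, for illustration only

def h(identity: str) -> int:
    """Hypothetical identity hash into GF(P)."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % P

def poly_mul_linear(coeffs, root):
    """Multiply a polynomial (coefficients low -> high) by (x - root) mod P."""
    out = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i] = (out[i] - root * c) % P
        out[i + 1] = (out[i + 1] + c) % P
    return out

def mask_key(ids, k):
    """Coefficients of f(x) = prod(x - h(id)) + k, published with the ciphertext."""
    coeffs = [1]
    for ident in ids:
        coeffs = poly_mul_linear(coeffs, h(ident))
    coeffs[0] = (coeffs[0] + k) % P
    return coeffs

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

k = 0xDEADBEEF
pub = mask_key(["alice", "bob", "carol"], k)
# Bob is authorized and knows k, so he can test other identities:
for probe in ["alice", "dave"]:
    print(probe, "authorized?", poly_eval(pub, h(probe)) == k)
# -> alice True, dave False: receiver anonymity is broken.
```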
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of the convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-(1/2)x²) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted norm inequalities to hold for k = 0, 1, 2, …, r.
A Lagrange-type projector on the real line
NASA Astrophysics Data System (ADS)
Mastroianni, G.; Notarangelo, I.
2010-01-01
We introduce an interpolation process based on some of the zeros of the m-th generalized Freud polynomial. Convergence results and error estimates are given. In particular we show that, in some important function spaces, the interpolating polynomial behaves like the best approximation. Moreover, the stability and the convergence of some quadrature rules are proved.
Using multi-dimensional Smolyak interpolation to make a sum-of-products potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
2015-07-28
We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).
2009-03-01
…the 1-D local basis functions. The 1-D Lagrange polynomial local basis functions, defined using Legendre-Gauss-Lobatto interpolation points, were … All cases were run using 10th-order polynomials, with contours from -0.05 to 0.525 K at an interval of 0.025 K, shown after 700 s for resolutions of (a) 20, (b) 10, and (c) 5 m.
Sandia Higher Order Elements (SHOE) v 0.5 alpha
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-09-24
SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
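For readers unfamiliar with the technique, a minimal sketch of the general idea follows. It uses the common two-sample polynomial step correction (often called PolyBLEP), a close relative of the integrated-Lagrange/B-spline corrections studied in this paper, rather than the authors' preferred integrated third-order B-spline kernel; frequencies and sample counts are illustrative.

```python
# Minimal PolyBLEP-style sawtooth: a quadratic correction is added on the two
# samples surrounding each phase wrap to approximate the bandlimited step.
import numpy as np

def polyblep(t, dt):
    """Polynomial residual of (ideal bandlimited step - naive step),
    nonzero only on the samples adjacent to the discontinuity at t = 0."""
    if t < dt:                    # sample just after the wrap
        t /= dt
        return 2.0 * t - t * t - 1.0
    if t > 1.0 - dt:              # sample just before the wrap
        t = (t - 1.0) / dt
        return t * t + 2.0 * t + 1.0
    return 0.0

def saw(f0, fs, n):
    """n samples of a PolyBLEP-corrected sawtooth at f0 Hz, sample rate fs."""
    dt = f0 / fs                  # phase increment per sample
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        naive = 2.0 * phase - 1.0           # naive sawtooth in [-1, 1)
        out[i] = naive - polyblep(phase, dt) # subtract correction at the jump
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

y = saw(440.0, 44100.0, 1024)  # audible aliasing strongly suppressed vs. naive
```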
Analysis of the numerical differentiation formulas of functions with large gradients
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.
2017-10-01
The solution of a singularly perturbed problem corresponds to a function with large gradients. Therefore the question of interpolation and numerical differentiation of such functions is relevant. Interpolation based on Lagrange polynomials on a uniform mesh is widely applied. However, it is known that applying such interpolation to functions with large gradients leads to estimates that are not uniform with respect to the perturbation parameter, and therefore to errors of order O(1). To obtain estimates that are uniform with respect to the perturbation parameter, we can use polynomial interpolation on a fitted mesh, such as the piecewise-uniform Shishkin mesh, or we can construct, on a uniform mesh, an interpolation formula that is exact on the boundary layer components. In this paper the numerical differentiation formulas for functions with large gradients based on interpolation formulas on a uniform mesh, proposed by A.I. Zadorin, are investigated. The formulas for the first and the second derivatives of a function with two or three interpolation nodes are considered. Error estimates that are uniform with respect to the perturbation parameter are obtained in particular cases. Numerical results validating the theoretical estimates are discussed.
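The effect described here is easy to reproduce. The sketch below assumes the standard model boundary-layer function u(x) = exp(-x/ε) and compares piecewise-linear Lagrange interpolation on a uniform mesh with the same interpolation on a piecewise-uniform Shishkin mesh; the mesh size and ε values are illustrative.

```python
# Uniform vs. Shishkin mesh for interpolating a boundary-layer function.
import numpy as np

def shishkin_mesh(n, eps, sigma=2.0):
    """n+1 nodes: half in the layer [0, tau], half in [tau, 1]."""
    tau = min(0.5, sigma * eps * np.log(n))
    left = np.linspace(0.0, tau, n // 2 + 1)
    right = np.linspace(tau, 1.0, n - n // 2 + 1)[1:]
    return np.concatenate([left, right])

u = lambda x, eps: np.exp(-x / eps)
# fine evaluation grid that also resolves the thin layer near x = 0
xx = np.unique(np.concatenate([np.linspace(0.0, 1.0, 2001),
                               np.geomspace(1e-9, 1.0, 2001)]))
n = 64
for eps in (1e-1, 1e-3, 1e-5):
    for name, mesh in (("uniform", np.linspace(0.0, 1.0, n + 1)),
                       ("Shishkin", shishkin_mesh(n, eps))):
        err = np.max(np.abs(u(xx, eps) - np.interp(xx, mesh, u(mesh, eps))))
        print(f"eps={eps:8.0e}  {name:8s}  max error = {err:.3e}")
# Expected trend: the uniform-mesh error grows toward O(1) as eps shrinks,
# while the Shishkin-mesh error stays essentially uniform in eps.
```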
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
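The classical-share layer of such schemes rests on Shamir's (t, n) threshold construction: the secret is the constant term of a random degree-(t-1) polynomial over a finite field, shares are point evaluations, and any t shares recover the secret by Lagrange interpolation at x = 0. A generic sketch follows (plain Shamir sharing, not this paper's m-bonacci/OAM construction).

```python
# Shamir secret sharing via Lagrange interpolation over GF(P).
import secrets

P = 2**127 - 1  # assumed prime modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which suffice to recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```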
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray-level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
Interpolation by new B-splines on a four directional mesh of the plane
NASA Astrophysics Data System (ADS)
Nouisser, O.; Sbibih, D.
2004-01-01
In this paper we construct new simple and composed B-splines on the uniform four-directional mesh of the plane, in order to improve the approximation order of the B-splines studied in Sablonniere (in: Program on Spline Functions and the Theory of Wavelets, Proceedings and Lecture Notes, Vol. 17, University of Montreal, 1998, pp. 67-78). If φ is such a simple B-spline, we first determine the space of polynomials with maximal total degree included in S(φ), and we prove some results concerning the linear independence of the family of translates of φ. Next, we show that the cardinal interpolation with φ is correct and we study in S(φ) a Lagrange interpolation problem. Finally, we define composed B-splines by repeated convolution of φ with the characteristic functions of a square or a lozenge, and we give some of their properties.
Nonlinear Viscoelastic Analysis of Orthotropic Beams Using a General Third-Order Theory
2012-06-20
…the Kirchhoff stress tensor, denoted by σ, and the reduced strain tensor Ē are related by (σxx, σzz, σxz)ᵀ = [Q11(0), Q13(0), 0; Q13(0), Q33(0), 0; 0, 0, Q55(0)] (…). Fig. 1. Graphs of (a) equi-spaced and (b) spectral Lagrange interpolation functions for polynomial order p = 11.
Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, Remi
1992-01-01
An essentially non-oscillatory reconstruction for functions defined on finite element type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction via deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first one. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
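The stencil-selection idea at the heart of ENO-type reconstruction can be shown in one dimension: among candidate stencil extensions, pick the one whose highest Newton divided difference is smallest in magnitude, so the stencil grows away from discontinuities. A minimal sketch with a hypothetical step-function example (the multivariate, mesh-based setting of the paper is substantially more involved):

```python
# 1-D ENO stencil selection driven by Newton divided differences.
import numpy as np

def newton_coeffs(x, y):
    """Newton divided-difference coefficients; c[k] = f[x_0, ..., x_k]."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k-1:-1]) / (x[k:] - x[:-k])
    return c

def eno_stencil(x, u, i, width):
    """Grow the stencil {i, i+1} to `width` points, extending at each step
    toward the side with the smaller highest divided difference."""
    lo, hi = i, i + 1
    while hi - lo + 1 < width:
        can_left, can_right = lo > 0, hi < len(x) - 1
        if can_left and can_right:
            dl = newton_coeffs(x[lo-1:hi+1], u[lo-1:hi+1])[-1]
            dr = newton_coeffs(x[lo:hi+2], u[lo:hi+2])[-1]
            if abs(dl) < abs(dr):
                lo -= 1
            else:
                hi += 1
        elif can_left:
            lo -= 1
        else:
            hi += 1
    return lo, hi

x = np.linspace(0.0, 1.0, 21)
u = np.where(x < 0.43, 0.0, 1.0)     # jump between nodes 8 and 9
print(eno_stencil(x, u, 7, 4))       # -> (5, 8): the stencil avoids the jump
```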
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
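A minimal interpolation-based semi-Lagrangian solver of the kind analyzed here, for u_t + a u_x = 0 on a periodic grid with cubic (4-point) Lagrange interpolation at the departure points; the grid size, time step, and step count are illustrative assumptions.

```python
# Semi-Lagrangian advection with cubic Lagrange interpolation (periodic).
import numpy as np

def semi_lagrangian(u_init, a, dx, dt, steps):
    n = len(u_init)
    u = u_init.copy()
    for _ in range(steps):
        s = (np.arange(n) - a * dt / dx) % n   # departure points (index units)
        j = np.floor(s).astype(int)            # base node
        t = s - j                              # fractional offset in [0, 1)
        um1, uc, up1, up2 = (u[(j + k) % n] for k in (-1, 0, 1, 2))
        # cubic Lagrange basis on local nodes -1, 0, 1, 2 evaluated at t
        u = (-t*(t-1)*(t-2)/6 * um1 + (t*t-1)*(t-2)/2 * uc
             - t*(t+1)*(t-2)/2 * up1 + t*(t*t-1)/6 * up2)
    return u

n, a = 128, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx, dt, steps = 1.0 / n, 0.0013, 1000
u = semi_lagrangian(np.sin(2*np.pi*x), a, dx, dt, steps)
exact = np.sin(2*np.pi*((x - a*dt*steps) % 1.0))
print("max error after", steps, "steps:", np.max(np.abs(u - exact)))
# The interpolation error accumulates roughly in proportion to the number of
# time steps, the characteristic behavior the paper's estimates describe.
```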
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
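The reconstruction step can be sketched as follows. Assuming the standard generation of the Padua points of degree n by sampling the curve (cos nt, cos((n+1)t)), the unique total-degree-n interpolant in a product-Chebyshev basis is obtained by solving a square Vandermonde-type system; the smooth test function standing in for a measured Q function, and the degree n = 9, are arbitrary choices.

```python
# Polynomial interpolation at the Padua points (first family).
import numpy as np
from numpy.polynomial.chebyshev import chebval

def padua_points(n):
    t = np.pi * np.arange(n * (n + 1) + 1) / (n * (n + 1))
    pts = np.unique(np.round(np.column_stack([np.cos(n * t),
                                              np.cos((n + 1) * t)]), 12), axis=0)
    assert len(pts) == (n + 1) * (n + 2) // 2   # unisolvent for degree n
    return pts

def cheb2d(x, y, a, b):
    """T_a(x) * T_b(y)."""
    ca = np.zeros(a + 1); ca[a] = 1.0
    cb = np.zeros(b + 1); cb[b] = 1.0
    return chebval(x, ca) * chebval(y, cb)

n = 9
pts = padua_points(n)
pairs = [(a, b) for a in range(n + 1) for b in range(n + 1 - a)]
V = np.column_stack([cheb2d(pts[:, 0], pts[:, 1], a, b) for a, b in pairs])
f = lambda x, y: np.exp(-(x**2 + y**2)) * np.cos(2*x + y)  # test "Q function"
coef = np.linalg.solve(V, f(pts[:, 0], pts[:, 1]))         # interpolant coeffs

xq, yq = 0.3, -0.7   # evaluate the interpolant off the sampling grid
approx = sum(c * cheb2d(xq, yq, a, b) for c, (a, b) in zip(coef, pairs))
print(abs(approx - f(xq, yq)))   # small for smooth f (spectral accuracy)
```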
2014-04-01
The CG and DG horizontal discretizations employ high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. Inside each element we build (N + 1) GLL quadrature points, where N indicates the polynomial order of the basis.
Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes
NASA Astrophysics Data System (ADS)
Pasquetti, Richard; Rapetti, Francesca
2017-10-01
In a recent JCP paper [9], a higher order triangular spectral element method (TSEM) is proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e. for the QSEM, the set of points is simply obtained by a tensorial product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, finding such an appropriate set of points is however not trivial. Thus, the work of [9] follows earlier works that started in the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see e.g. [12], that makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of any nodal hp-finite element method, one may expect that the conclusions of this Note will remain relevant if using other sets of carefully defined interpolation points.
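For reference, the GLL nodes and weights that recur throughout these records can be computed directly: the nodes are ±1 together with the roots of P'_N, and the quadrature weights are w_i = 2/(N(N+1) P_N(x_i)²). A short sketch using NumPy's Legendre utilities:

```python
# Gauss-Lobatto-Legendre nodes and weights of order N.
import numpy as np
from numpy.polynomial import legendre as L

def gll(N):
    cN = np.zeros(N + 1); cN[N] = 1.0   # Legendre-series coefficients of P_N
    nodes = np.sort(np.concatenate([[-1.0, 1.0], L.legroots(L.legder(cN))]))
    weights = 2.0 / (N * (N + 1) * L.legval(nodes, cN) ** 2)
    return nodes, weights

x, w = gll(6)
print(np.sum(w))            # -> 2.0 (integrates constants exactly)
print(np.sum(w * x**10))    # close to 2/11: GLL is exact up to degree 2N - 1
```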
Necessary conditions for weighted mean convergence of Lagrange interpolation for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.; Kwon, K. H.
2001-07-01
Given a continuous real-valued function f which vanishes outside a fixed finite interval, we establish necessary conditions for weighted mean convergence of Lagrange interpolation for a general class of even weights w which are of exponential decay on the real line or at the endpoints of (-1,1).
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
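A compressed version of the comparison, with an assumed degree and interval: approximate e^x on [0, 1] by the degree-n Taylor polynomial at 0 and by the degree-n interpolant through n+1 equally spaced points, then compare maximum errors.

```python
# Taylor vs. interpolating polynomial for e^x on [0, 1].
import numpy as np
from math import factorial

n = 4
xx = np.linspace(0.0, 1.0, 2001)

taylor = sum(xx**k / factorial(k) for k in range(n + 1))   # Taylor at x = 0

nodes = np.linspace(0.0, 1.0, n + 1)                       # equally spaced
coeffs = np.polyfit(nodes, np.exp(nodes), n)               # exact interpolant
interp = np.polyval(coeffs, xx)

print("max Taylor error:       ", np.max(np.abs(np.exp(xx) - taylor)))
print("max interpolation error:", np.max(np.abs(np.exp(xx) - interp)))
# The interpolant spreads its error over the interval and its maximum error
# is far smaller than the Taylor polynomial's, which degrades away from 0.
```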
Performance of Statistical Temporal Downscaling Techniques of Wind Speed Data Over Aegean Sea
NASA Astrophysics Data System (ADS)
Gokhan Guler, Hasan; Baykal, Cuneyt; Ozyurt, Gulizar; Kisacik, Dogan
2016-04-01
Wind speed data is a key input for many meteorological and engineering applications. Many institutions provide wind speed data with temporal resolutions ranging from one hour to twenty-four hours. Higher temporal resolution is generally required for some applications such as reliable wave hindcasting studies. One solution to generate wind data at high sampling frequencies is to use statistical downscaling techniques to interpolate values at the finer sampling intervals from the available data. In this study, the major aim is to assess the temporal downscaling performance of nine statistical interpolation techniques by quantifying the inherent uncertainty due to the selection of different techniques. For this purpose, hourly 10-m wind speed data taken from 227 data points over the Aegean Sea between 1979 and 2010, having a spatial resolution of approximately 0.3 degrees, are analyzed from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis database. Additionally, hourly 10-m wind speed data of two in-situ measurement stations between June, 2014 and June, 2015 are considered to understand the effect of dataset properties on the uncertainty generated by the interpolation technique. In this study, nine statistical interpolation techniques are selected: w0 (left constant) interpolation, w6 (right constant) interpolation, averaging step function interpolation, linear interpolation, 1D Fast Fourier Transform interpolation, 2nd and 3rd degree Lagrange polynomial interpolation, cubic spline interpolation, and piecewise cubic Hermite interpolating polynomials. The original data is downsampled to 6 hours (i.e. wind speeds at the 0th, 6th, 12th and 18th hours of each day are selected), then the 6-hourly data is temporally downscaled to hourly data (i.e. the wind speeds at each hour between the intervals are computed) using the nine interpolation techniques, and finally the original data is compared with the temporally downscaled data. A penalty point system based on coefficient of variation root mean square error, normalized mean absolute error, and prediction skill is selected to rank the nine interpolation techniques according to their performance. Thus, the error originating from the temporal downscaling technique is quantified, which is an important output for determining wind and wave modelling uncertainties, and the performance of these techniques is demonstrated over the Aegean Sea, indicating spatial trends and discussing relevance to data type (i.e. reanalysis data or in-situ measurements). Furthermore, the bias introduced by the best temporal downscaling technique is discussed. Preliminary results show that overall piecewise cubic Hermite interpolating polynomials have the highest performance for temporally downscaling wind speed data for both reanalysis data and in-situ measurements over the Aegean Sea. However, it is observed that cubic spline interpolation performs much better along the Aegean coastline where the data points are close to the land. Acknowledgement: This research was partly supported by TUBITAK Grant number 213M534 according to Turkish Russian Joint research grant with RFBR and the CoCoNET (Towards Coast to Coast Network of Marine Protected Areas Coupled by Wind Energy Potential) project funded by European Union FP7/2007-2013 program.
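A stripped-down version of such an experiment, using synthetic data in place of the CFSR and in-situ records and only three of the nine techniques, might look as follows:

```python
# 6-hourly wind speeds are interpolated back to hourly and compared with the
# original hourly series; the synthetic series is an assumption for the demo.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator, interp1d

rng = np.random.default_rng(0)
t_hourly = np.arange(0, 24 * 30)                         # 30 days, hourly
wind = 8 + 3*np.sin(2*np.pi*t_hourly/24) + rng.normal(0, 0.8, t_hourly.size)
t6, w6 = t_hourly[::6], wind[::6]                        # 6-hourly subsample

methods = {
    "linear":       interp1d(t6, w6, kind="linear"),
    "cubic spline": CubicSpline(t6, w6),
    "PCHIP":        PchipInterpolator(t6, w6),
}
t_eval = t_hourly[t_hourly <= t6[-1]]                    # avoid extrapolation
for name, f in methods.items():
    err = f(t_eval) - wind[:t_eval.size]
    rmse = np.sqrt(np.mean(err**2))
    nmae = np.mean(np.abs(err)) / np.mean(wind)
    print(f"{name:13s} RMSE = {rmse:.3f} m/s   NMAE = {nmae:.3%}")
```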
2013-01-01
…where L′_N is the derivative of the Nth-order Legendre polynomial. Given these definitions, the one-dimensional Lagrange polynomials h_i(ξ) are h_i(ξ) = −[(1 − ξ²) L′_N(ξ)] / [N(N + 1) L_N(ξ_i) (ξ − ξ_i)]. Figure 2. Detail of one interface patch in the northern hemisphere; the high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid. …each larger element is connected to smaller ones by a Lagrange polynomial of order n_I. The number of quadrilateral elements and grid points of the final grid are then given by N_p = 6(N…
An Introduction to Lagrangian Differential Calculus.
ERIC Educational Resources Information Center
Schremmer, Francesca; Schremmer, Alain
1990-01-01
Illustrates how Lagrange's approach applies to the differential calculus of polynomial functions when approximations are obtained. Discusses how to obtain polynomial approximations in other cases. (YP)
Fractional spectral and pseudo-spectral methods in unbounded domains: Theory and applications
NASA Astrophysics Data System (ADS)
Khosravian-Arab, Hassan; Dehghan, Mehdi; Eslahchi, M. R.
2017-06-01
This paper is intended to provide exponentially accurate Galerkin, Petrov-Galerkin and pseudo-spectral methods for fractional differential equations on a semi-infinite interval. We start our discussion by introducing two new non-classical Lagrange basis functions, NLBFs-1 and NLBFs-2, which are based on the two new families of the associated Laguerre polynomials, GALFs-1 and GALFs-2, obtained recently by the authors in [28]. With respect to the NLBFs-1 and NLBFs-2, two new non-classical interpolants based on the associated Laguerre-Gauss and Laguerre-Gauss-Radau points are introduced and then fractional (pseudo-spectral) differentiation (and integration) matrices are derived. Convergence and stability of the new interpolants are proved in detail. Several numerical examples are considered to demonstrate the validity and applicability of the basis functions to approximate fractional derivatives (and integrals) of some functions. Moreover, the pseudo-spectral, Galerkin and Petrov-Galerkin methods are successfully applied to solve some physical ordinary differential equations of either fractional or integer order. Some useful comments from the numerical point of view on the Galerkin and Petrov-Galerkin methods are listed at the end.
Biomimetics of throwing at basketball
NASA Astrophysics Data System (ADS)
Merticaru, E.; Budescu, E.; Iacob, R. M.
2016-08-01
The paper deals with the inverse dynamics of a kinematic chain modelling the human upper limb when throwing the ball at basketball, aiming to calculate the torques required to actuate the technical system. The kinematic chain respects the anthropometric features regarding the length and mass of body segments. The kinematic parameters of the motion were determined by measuring the angles of body segments in a succession of filmed pictures of a throw, followed by interpolation of these values and determination of the interpolating polynomials for each independent geometric coordinate. Using the Lagrange equations, the variations with time of the torques required to drive the kinematic chain, of the triple-physical-pendulum type, were determined. The obtained values show, naturally, that the biggest torque is that for the mimetic articulation of the shoulder, being comparable with those produced by the brachial biceps muscle of the analyzed human subject. Using the obtained data, a mimetic technical system of robotic type can be conceived, with application in sports, so as to perform the motion of throwing the ball at the basket from a steady position.
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
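The viewpoint is easy to make concrete: the interpolation conditions form a linear system V c = y, and the monomial, Lagrange, and Newton forms of the interpolant correspond to different bases for the same polynomial subspace; in the Lagrange basis the system matrix is the identity. A small sketch:

```python
# One interpolant, two bases: monomial (Vandermonde system) vs. Lagrange.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# Monomial basis: V[i, j] = x_i^j, solve V c = y
V = np.vander(x, increasing=True)
c = np.linalg.solve(V, y)

# Lagrange basis: the system matrix is I, so the coefficients are just y
def lagrange_basis(x_nodes, k, t):
    """ell_k(t) = prod_{j != k} (t - x_j) / (x_k - x_j)."""
    num = np.prod([t - xj for j, xj in enumerate(x_nodes) if j != k], axis=0)
    den = np.prod([x_nodes[k] - xj for j, xj in enumerate(x_nodes) if j != k])
    return num / den

t = 3.0
p_monomial = np.polyval(c[::-1], t)
p_lagrange = sum(y[k] * lagrange_basis(x, k, t) for k in range(len(x)))
print(p_monomial, p_lagrange)   # identical: same polynomial, two bases
```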
High degree interpolation polynomial in Newton form
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1988-01-01
Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
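The re-ordering requirement can be realized, for example, by a greedy Leja ordering of the nodes; the sketch below uses that ordering (which may differ from the paper's exact arrangement) for a Newton-form interpolant at Chebyshev nodes.

```python
# Newton-form interpolation at Chebyshev nodes with Leja-ordered points.
import numpy as np

def leja_order(x):
    """Greedy Leja ordering: each new node maximizes the product of
    distances to the nodes already chosen."""
    x = list(x)
    ordered = [max(x, key=abs)]
    x.remove(ordered[0])
    while x:
        nxt = max(x, key=lambda t: np.prod([abs(t - s) for s in ordered]))
        ordered.append(nxt)
        x.remove(nxt)
    return np.array(ordered)

def newton_coeffs(x, y):
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k-1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(x, c, t):
    acc = np.zeros_like(t) + c[-1]
    for k in range(len(c) - 2, -1, -1):
        acc = acc * (t - x[k]) + c[k]
    return acc

n = 40
cheb = np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*(n + 1)))  # Chebyshev nodes
x = leja_order(cheb)
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)                        # Runge function
c = newton_coeffs(x, f(x))
tt = np.linspace(-1.0, 1.0, 1001)
print("max error:", np.max(np.abs(newton_eval(x, c, tt) - f(tt))))
```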
Hermite-Birkhoff interpolation in the nth roots of unity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavaretta, A.S. Jr.; Sharma, A.; Varga, R.S.
1980-06-01
Consider, as nodes for polynomial interpolation, the nth roots of unity. For a sufficiently smooth function f(z), we require a polynomial p(z) to interpolate f and certain of its derivatives at each node. It is shown that the so-called Pólya conditions, which are necessary for unique interpolation, are in this setting also sufficient.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using the fourth-order polynomial provides the best fit to option prices, in that it has the lowest error value.
Unconventional Hamilton-type variational principle in phase space and symplectic algorithm
NASA Astrophysics Data System (ADS)
Luo, En; Huang, Weijiang; Zhang, Hexin
2003-06-01
By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of a multi-degree-of-freedom system is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying a Lagrange interpolation polynomial to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is a highly efficient one with better computational performance.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
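One of the one-dimensional search options named above, polynomial interpolation, reduces to a simple formula: fit a parabola through three bracketing samples of the objective along the search direction and jump to its minimizer. A simplified sketch of the idea (not ADS's actual code):

```python
# One step of a quadratic-interpolation line search.
def quadratic_step(a, fa, b, fb, c, fc):
    """Minimizer of the parabola through (a, fa), (b, fb), (c, fc)."""
    num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den

f = lambda s: (s - 1.3) ** 2 + 0.7   # objective along the search direction
s = quadratic_step(0.0, f(0.0), 1.0, f(1.0), 2.0, f(2.0))
print(s)   # 1.3 -- exact in one step for a quadratic; in general the step
           # is accepted or the bracket is shrunk and the fit repeated
```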
A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media
2010-08-01
…applicable to flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos (gPC) expansions … represent the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic … of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that global polynomial interpolation cannot resolve lo…
Interpolation Hermite Polynomials For Finite Element Method
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new algorithm for analytic calculation of high-order Hermite interpolation polynomials of the simplex and give their classification. A typical example of triangle element, to be built in high accuracy finite element schemes, is given.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them have been applied in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. An experiment on background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) among spline interpolation, polynomial fitting, Lorentz fitting and the model-free method after background correction. These background correction methods all acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, and the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940 respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
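Our reading of the baseline step, reduced to a sketch with synthetic data in place of a real LIBS spectrum: choose anchor points in windows between emission lines, pass a cubic spline through them, and subtract the interpolated curve as the continuous background.

```python
# Spline-interpolation background estimation for a synthetic spectrum.
import numpy as np
from scipy.interpolate import CubicSpline

wav = np.linspace(300.0, 600.0, 3000)                    # wavelength grid, nm
background = 50 * np.exp(-((wav - 420) / 120.0) ** 2)    # smooth continuum
lines = sum(a * np.exp(-0.5 * ((wav - c) / 0.3) ** 2)    # emission lines
            for a, c in [(80, 324.8), (60, 327.4), (40, 510.6), (30, 521.8)])
spectrum = background + lines + np.random.default_rng(1).normal(0, 0.5, wav.size)

# anchor points: the minimum of each window, assumed to lie between lines
idx = []
for i in range(0, wav.size - 150, 150):
    idx.append(i + int(np.argmin(spectrum[i:i + 150])))

baseline = CubicSpline(wav[idx], spectrum[idx])(wav)
corrected = spectrum - baseline
# `corrected` tracks the line spectrum up to the noise level; comparing SBR
# before and after subtraction mimics the evaluation used in the paper.
print("max residual vs. true lines:", np.max(np.abs(corrected - lines)))
```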
NASA Astrophysics Data System (ADS)
Zhang, J.; Gao, Q.; Tan, S. J.; Zhong, W. X.
2012-10-01
A new method is proposed as a solution for the large-scale coupled vehicle-track dynamic model with nonlinear wheel-rail contact. The vehicle is simplified as a multi-rigid-body model, and the track is treated as a three-layer beam model. In the track model, the rail is assumed to be an Euler-Bernoulli beam supported by discrete sleepers. The vehicle model and the track model are coupled using Hertzian nonlinear contact theory, and the contact forces of the vehicle subsystem and the track subsystem are approximated by the Lagrange interpolation polynomial. The response of the large-scale coupled vehicle-track model is calculated using the precise integration method. A more efficient algorithm based on the periodic property of the track is applied to calculate the exponential matrix and certain matrices related to the solution of the track subsystem. Numerical examples demonstrate the computational accuracy and efficiency of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained through minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
Quadratic polynomial interpolation on triangular domain
NASA Astrophysics Data System (ADS)
Li, Ying; Zhang, Congcong; Yu, Qian
2018-04-01
In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. First, the scattered spatial data points are projected onto a plane and then triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Last, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying certain accuracy and continuity requirements and avoiding excessive convexity. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and so on. Experimental results for the new surface are given.
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process that reconstructs full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable step for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
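The numerical core of the approach, in miniature and with made-up slope data: fit the sampled dF/dλ values with a polynomial (here over non-equidistant λ values, as the report recommends) and integrate the fit over [0, 1] to estimate ΔF.

```python
# Polynomial fit of thermodynamic integration data, then analytic integration.
import numpy as np
from numpy.polynomial import polynomial as Poly

lam = np.array([0.0, 0.15, 0.4, 0.6, 0.85, 1.0])    # non-equidistant lambdas
dFdl = np.array([12.1, 7.9, 3.2, 0.9, -1.4, -2.0])  # sampled slopes (made up)

coefs = Poly.polyfit(lam, dFdl, deg=3)               # least-squares polynomial
anti = Poly.polyint(coefs)                           # antiderivative
dF = Poly.polyval(1.0, anti) - Poly.polyval(0.0, anti)
print("polynomial estimate of dF:", dF)

# trapezoidal estimate on the same data, for comparison
trap = np.sum(np.diff(lam) * (dFdl[1:] + dFdl[:-1]) / 2.0)
print("trapezoid estimate of dF: ", trap)
```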
Interpolation problem for the solutions of linear elasticity equations based on monogenic functions
NASA Astrophysics Data System (ADS)
Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii
2017-11-01
Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of such interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows a lot of analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.
Interpolation and Polynomial Curve Fitting
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2014-01-01
Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maginot, P. G.; Ragusa, J. C.; Morel, J. E.
2013-07-01
We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd power polynomials in 1-D slab geometry. By changing interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high order accuracy numerical quadrature schemes. (authors)
Normal modes of the shallow water system on the cubed sphere
NASA Astrophysics Data System (ADS)
Kang, H. G.; Cheong, H. B.; Lee, C. H.
2017-12-01
Spherical harmonics, expressed as the Rossby-Haurwitz waves, are the normal modes of the non-divergent barotropic model. Among the normal modes in a numerical model, the most unstable mode will contaminate the numerical results, and therefore the investigation of the normal modes for a given grid system and discretization method is important. The cubed-sphere grid, which consists of six identical faces, has been widely adopted in many atmospheric models. This grid system is non-orthogonal, so the calculation of the normal modes is quite a challenging problem. In the present study, the normal modes of the shallow water system on the cubed sphere, discretized by the spectral element method employing the Gauss-Lobatto Lagrange interpolating polynomials as orthogonal basis functions, are investigated. The algebraic equations for the shallow water equations on the cubed sphere are derived, and the huge global matrix is constructed. The linear system representing the eigenvalue-eigenvector relations is solved by numerical libraries. The normal modes calculated for several horizontal resolutions and Lamb parameters will be discussed and compared to the normal modes from the spherical harmonics spectral method.
Polynomial interpolation and sums of powers of integers
NASA Astrophysics Data System (ADS)
Cereceda, José Luis
2017-02-01
In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, Pk(n) and Qk(n), such that Pk(n) = Qk(n) = fk(n) for n = 1, 2,… , k, where fk(1), fk(2),… , fk(k) are k arbitrarily chosen (real or complex) values. Then, we focus on the case that fk(n) is given by the sum of powers of the first n positive integers, Sk(n) = 1^k + 2^k + ⋯ + n^k, and show that Sk(n) admits the polynomial representations Sk(n) = Pk(n) and Sk(n) = Qk(n) for all n = 1, 2,… , and k ≥ 1, where the first representation involves the Eulerian numbers, and the second one the Stirling numbers of the second kind. Finally, we consider yet another polynomial formula for Sk(n) alternative to the well-known formula of Bernoulli.
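The construction is easy to verify computationally. For k = 3, S_3(n) is a polynomial of degree 4, so interpolating through the five points (n, S_3(n)), n = 1,…,5, must reproduce Faulhaber's formula S_3(n) = (n(n+1)/2)². A check in exact rational arithmetic:

```python
# Recover S_3(n) by Newton interpolation through five exact data points.
from fractions import Fraction

def newton_interp(xs, ys):
    """Newton divided differences with Fractions; returns a callable."""
    n = len(xs)
    c = [Fraction(y) for y in ys]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    def p(t):
        acc = c[-1]
        for k in range(n - 2, -1, -1):
            acc = acc * (t - xs[k]) + c[k]
        return acc
    return p

k = 3
xs = [Fraction(x) for x in range(1, k + 3)]        # k+2 points fix degree k+1
ys = [sum(m**k for m in range(1, int(x) + 1)) for x in xs]
S3 = newton_interp(xs, ys)
assert all(S3(n) == (n * (n + 1) // 2) ** 2 for n in range(1, 50))
print("S_3(n) reproduced for n = 1..49")
```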
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
Bi-cubic interpolation for shift-free pan-sharpening
NASA Astrophysics Data System (ADS)
Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano
2013-12-01
Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that implement piecewise local polynomials, typically linear or cubic interpolation functions. Conversely, constant (i.e. nearest-neighbour) and quadratic kernels, implementing degree-zero and degree-two polynomials, respectively, introduce shifts in the magnified images, which are sub-pixel in the case of interpolation by an even factor, as is most usual. However, in standard satellite systems, the point spread functions (PSFs) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
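A minimal sketch of the even-length, linear-phase idea: cubic Lagrange interpolation evaluated at the midpoint of four samples yields a symmetric 4-tap kernel that resamples a signal on a grid offset by half a pixel. The taps below follow from the Lagrange basis alone and are not taken from the paper.

```python
import numpy as np

def half_pixel_cubic_taps():
    """4-tap, even-length, linear-phase kernel: cubic Lagrange interpolation
    evaluated at the midpoint (x = 0) of nodes at -3/2, -1/2, +1/2, +3/2."""
    xs = np.array([-1.5, -0.5, 0.5, 1.5])
    return np.array([np.prod((0.0 - np.delete(xs, k)) /
                             (xs[k] - np.delete(xs, k))) for k in range(4)])

def shift_half_pixel(row):
    """Resample a 1-D signal on a grid offset by half a sample
    (applied separably along rows and columns for an image)."""
    return np.convolve(row, half_pixel_cubic_taps(), mode='valid')

print(half_pixel_cubic_taps())   # [-0.0625  0.5625  0.5625 -0.0625]
```

Because the kernel is symmetric, no coefficient reversal is needed in the convolution.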
2013-08-01
… release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange … cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, … Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions [9]. GPOPS also automatically refines the "mesh" by …
NASA Astrophysics Data System (ADS)
Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico
2017-11-01
The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study, four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple, ordinary, universal, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of the uncertainty of the chosen interpolation method into the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data, including a comparison between calculated groundwater depths and the geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of >10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results were obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
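Of the deterministic methods listed, inverse distance weighting is the simplest to state; a minimal numpy sketch follows, with hypothetical well coordinates and heads (the study's aquifer data are not reproduced here).

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of a groundwater surface.

    points: (n, 2) observation well coordinates; values: (n,) heads;
    query:  (m, 2) grid locations to estimate.
    """
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power        # weights fall off with distance
    return (w @ values) / w.sum(axis=1)

wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
heads = np.array([10.0, 12.0, 11.0, 9.0])
grid = np.array([[0.5, 0.5], [0.25, 0.75]])
print(idw(wells, heads, grid))
```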
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing … interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms … congruences), and linear Diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
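A toy illustration of why exponential fitting pays off on stiff problems, using the scalar test equation y' = λy; this is a sketch of the general idea, not the CREK1D algorithm itself.

```python
import numpy as np

# On the stiff linear test problem y' = lam*y, an exponential interpolant
# reproduces the decaying solution exactly, while the polynomial-based
# trapezoidal rule only approximates it and oscillates for large |lam|*h.
lam, h, steps = -50.0, 0.1, 10
y_trap, y_exp = 1.0, 1.0
for _ in range(steps):
    # trapezoidal rule: amplification factor (1 + lam*h/2) / (1 - lam*h/2)
    y_trap *= (1 + lam * h / 2) / (1 - lam * h / 2)
    # exponentially fitted step: amplification factor matched to exp(lam*h)
    y_exp *= np.exp(lam * h)

print(y_trap)                    # oscillatory, slowly damped
print(y_exp)                     # exact: exp(lam*h*steps)
print(np.exp(lam * h * steps))
```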
Scientific data interpolation with low dimensional manifold model
NASA Astrophysics Data System (ADS)
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Elevation data fitting and precision analysis of Google Earth in road survey
NASA Astrophysics Data System (ADS)
Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei
2018-05-01
Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, the paper focuses on finding fitting or interpolation methods that improve the data precision enough to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated, and the precise elevation can then be obtained by subtracting the elevation difference from the Google Earth data. A quadratic polynomial surface fitting method, a cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used to process the Google Earth elevation data, with internal conformity, external conformity and the cross correlation coefficient as evaluation indexes. Results: The V4 interpolation method shows no fitting difference at the fitting points, but its external conformity is the largest and its accuracy improvement is the worst, so it is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but fits better where the elevation differences are larger. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be the main method, with the neural network method as an auxiliary for larger elevation differences. Conclusions: The cubic polynomial surface fitting method can clearly improve the precision of Google Earth elevation data. After the precision improvement, the error of data in hilly terrain areas meets the requirements of the specifications, so the data can be used in the feasibility study stage of road survey and design.
NASA Astrophysics Data System (ADS)
Lambrecht, L.; Lamert, A.; Friederich, W.; Möller, T.; Boxberg, M. S.
2018-03-01
A nodal discontinuous Galerkin (NDG) approach is developed and implemented for the computation of viscoelastic wavefields in complex geological media. The NDG approach combines unstructured tetrahedral meshes with an element-wise, high-order spatial interpolation of the wavefield based on Lagrange polynomials. Numerical fluxes are computed from an exact solution of the heterogeneous Riemann problem. Our implementation offers capabilities for modelling viscoelastic wave propagation in 1-D, 2-D and 3-D settings of very different spatial scale with little logistical overhead. It allows the import of external tetrahedral meshes provided by independent meshing software and can be run in a parallel computing environment. Computation of adjoint wavefields and an interface for the computation of waveform sensitivity kernels are offered. The method is validated in 2-D and 3-D by comparison to analytical solutions and results from a spectral element method. The capabilities of the NDG method are demonstrated through a 3-D example case taken from tunnel seismics which considers high-frequency elastic wave propagation around a curved underground tunnel cutting through inclined and faulted sedimentary strata. The NDG method was coded into the open-source software package NEXD and is available from GitHub.
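The element-wise interpolation ingredient of such a nodal DG scheme can be sketched in a few lines; the nodes below are the degree-3 Gauss-Lobatto-Legendre points and the interpolated function is an arbitrary example, not NEXD code.

```python
import numpy as np

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial for the given nodes at x."""
    others = np.delete(nodes, j)
    return np.prod((x - others) / (nodes[j] - others))

def nodal_interpolate(nodes, values, x):
    """High-order element-wise interpolation: nodal values times basis polynomials."""
    return sum(values[j] * lagrange_basis(nodes, j, x) for j in range(len(nodes)))

nodes = np.array([-1.0, -1 / np.sqrt(5), 1 / np.sqrt(5), 1.0])  # degree-3 GLL points
vals = np.sin(np.pi * nodes)                                    # nodal wavefield values
print(nodal_interpolate(nodes, vals, 0.3), np.sin(np.pi * 0.3))
```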
NASA Astrophysics Data System (ADS)
Park, Sungkyung; Park, Chester Sungchung
2018-03-01
A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio access technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
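A minimal sketch of the cubic Lagrange interpolator at the core of such an FSRC: four neighbouring samples are combined with Lagrange weights evaluated at the fractional inter-sample offset. The signal and offset are illustrative; this is not the paper's hardware design.

```python
import numpy as np

def cubic_lagrange_frac_delay(x, mu):
    """Resample x at fractional offset mu (0 <= mu < 1) between samples
    n and n+1 using 4-point, 3rd-order Lagrange interpolation."""
    # Lagrange weights for nodes at -1, 0, 1, 2 evaluated at mu
    c_m1 = -mu * (mu - 1) * (mu - 2) / 6
    c_0 = (mu + 1) * (mu - 1) * (mu - 2) / 2
    c_p1 = -(mu + 1) * mu * (mu - 2) / 2
    c_p2 = (mu + 1) * mu * (mu - 1) / 6
    # x[:-3], x[1:-2], x[2:-1], x[3:] line up as x[n-1], x[n], x[n+1], x[n+2]
    return c_m1 * x[:-3] + c_0 * x[1:-2] + c_p1 * x[2:-1] + c_p2 * x[3:]

t = np.arange(32)
x = np.sin(2 * np.pi * 0.05 * t)
y = cubic_lagrange_frac_delay(x, 0.37)   # samples at t[1:-2] + 0.37
```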
Axisymmetric solid elements by a rational hybrid stress method
NASA Technical Reports Server (NTRS)
Tian, Z.; Pian, T. H. H.
1985-01-01
Four-node axisymmetric solid elements are derived by a new version of the hybrid method, for which the assumed stresses are expressed in complete polynomials in natural coordinates. The stress equilibrium conditions are introduced through the use of additional displacements as Lagrange multipliers. A rational procedure is to choose the displacement terms such that the resulting strains are also complete polynomials of the same order. Example problems all indicate that elements obtained by this procedure lead to better results in displacements and stresses than those obtained by other finite elements.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used to interpolate groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects increased from 2001 to 2013, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Since land use has changed, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth have caused over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease of farmland area and the development of water-saving irrigation have reduced agricultural water use, so the decline rate of the groundwater level in agricultural areas is not significant.
NASA Astrophysics Data System (ADS)
Zhang, Fan; Liu, Pinkuan
2018-04-01
In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
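A minimal sketch of the spline-based compensation idea, assuming hypothetical calibration data: measured positioning errors are modelled with a cubic B-spline and subtracted from the commanded position. scipy's make_interp_spline stands in for whatever fitting the authors used.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical calibration data: measured positioning errors sampled
# along the stage travel; units and values are illustrative only.
positions = np.linspace(0, 300, 13)                  # mm, calibration points
errors = 1e-3 * np.array([0, 2, 5, 6, 4, 1, -2,      # mm-scale measured error
                          -4, -3, 0, 3, 5, 6])
spline = make_interp_spline(positions, errors, k=3)  # cubic B-spline error model

def compensated_command(target):
    """Offset the commanded position by the spline-predicted error."""
    return target - spline(target)

print(compensated_command(123.4))
```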
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
The volatility function technique with an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth order polynomial, in extracting the RND. The implied volatility of options with respect to strike prices/delta is interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth order polynomial is more appropriate than using a smoothing spline, in that the fourth order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of future developments of the underlying asset.
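A hedged sketch of the fourth-order-polynomial route: fit a quartic to implied volatilities across strikes, price calls from the fitted smile, and obtain the RND from the second strike derivative (Breeden-Litzenberger). All numbers are illustrative, not DJIA data, and the Black-Scholes repricing step is an assumption of this sketch.

```python
import numpy as np
from scipy.stats import norm

# Illustrative smile data (not DJIA); S = spot, r = rate, T = 1 month
S, r, T = 100.0, 0.01, 1.0 / 12.0
strikes = np.array([85, 90, 95, 100, 105, 110, 115], dtype=float)
ivs = np.array([0.26, 0.23, 0.21, 0.20, 0.205, 0.22, 0.24])

coef = np.polyfit(strikes, ivs, 4)        # fourth order polynomial smile fit

def bs_call(K, sigma):
    """Black-Scholes call price, used here to reprice the fitted smile."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

K = np.linspace(85, 115, 301)
C = bs_call(K, np.polyval(coef, K))
rnd = np.exp(r * T) * np.gradient(np.gradient(C, K), K)  # Breeden-Litzenberger
print(np.trapz(rnd, K))   # total probability captured in the strike window
```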
Higher order derivatives of R-Jacobi polynomials
NASA Astrophysics Data System (ADS)
Das, Sourav; Swaminathan, A.
2016-06-01
In this work, the R-Jacobi polynomials defined on the nonnegative real axis related to the F-distribution are considered. Using their Sturm-Liouville system, higher order derivatives are constructed. The orthogonality property of these higher order R-Jacobi polynomials is obtained, besides their normal form, self-adjoint form and hypergeometric representation. Interesting results on the interpolation formula and Gaussian quadrature formulae are obtained, with numerical examples.
Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation
Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed
2012-01-01
This paper presents a novel, real-time defect detection system, based on best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogenous polynomially parameter-dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)
2010-06-01
… the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1, +1], which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0. … the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) a collision-free criterion is developed and …
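The LGL points mentioned in the fragment above are computable directly from the stated characterization (the endpoints plus the roots of the derivative of the Nth Legendre polynomial); a short numpy sketch:

```python
import numpy as np

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, +1]: the endpoints plus the
    roots of the derivative of the N-th Legendre polynomial P_N'(t)."""
    c = np.zeros(N + 1)
    c[-1] = 1.0                                   # coefficients of P_N
    dP = np.polynomial.legendre.legder(c)         # coefficients of P_N'
    interior = np.polynomial.legendre.legroots(dP)
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(lgl_nodes(4))   # symmetric about 0, not evenly spaced
```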
Radial Basis Function Based Quadrature over Smooth Surfaces
2016-03-24
Radial basis functions φ(r): piecewise smooth (conditionally positive definite) kernels include the monomial |r|^{2m+1} (MN) and the thin plate spline |r|^{2m} ln|r| (TPS); infinitely smooth … smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see Table 1) with Green's integral formula [29].
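For the thin-plate-spline interpolation referenced in the table fragment, scipy ships a ready-made implementation; a minimal sketch on scattered planar data follows (the surface quadrature step of the report is not reproduced).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Thin-plate-spline RBF interpolation of scattered planar samples; the
# conditionally positive definite kernel is augmented internally with a
# polynomial term, matching the TPS entry of the table fragment above.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))
f = np.cos(np.pi * pts[:, 0]) * np.sin(np.pi * pts[:, 1])
interp = RBFInterpolator(pts, f, kernel='thin_plate_spline')

query = np.array([[0.1, 0.2], [-0.3, 0.5]])
print(interp(query))
```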
2011-01-01
… present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. … across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α …
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
NASA Astrophysics Data System (ADS)
Hu, Shou-Cun; Ji, Jiang-Hui
2017-12-01
In asteroid rendezvous missions, the dynamical environment near an asteroid's surface should be made clear prior to launch. However, most asteroids have irregular shapes, which lowers the efficiency of calculating their gravitational fields with the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applied to other irregularly-shaped asteroids and can greatly improve their evaluation efficiency.
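The per-cell ingredient of the proposed scheme, Chebyshev interpolation of an acceleration profile on one coordinate interval, can be sketched with numpy; the 1/r^2 stand-in model and the cell bounds are assumptions for illustration, not the Toutatis field.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# One cell of the idea: approximate an expensive radial acceleration
# profile (here a stand-in inverse-square law) by a Chebyshev interpolant
# on the cell [r0, r1]; evaluating the interpolant then replaces the
# costly model inside that cell.
r0, r1 = 1.0, 2.0
accel = lambda r: 1.0 / r**2                     # stand-in for the real model
cheb = C.Chebyshev.interpolate(accel, deg=8, domain=[r0, r1])

r = np.linspace(r0, r1, 5)
print(np.max(np.abs(cheb(r) - accel(r))))        # interpolation error in the cell
```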
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator, which is dedicated to large-scale and high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm and a compliant microgripper is considered, and both flexure hinges and flexible beams are modeled. By combining the pseudo-rigid-body-model method, the assumed mode method and the Lagrange equation, the overall dynamic model is derived. Then, the rigid-flexible coupling characteristics are analyzed by numerical simulations. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. The reference curve and the interpolation curve use quintic polynomial trajectories. Afterwards, an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Therefore, the operation efficiency and manipulation stability are significantly improved.
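The quintic polynomial trajectory used for the reference and interpolation curves is determined by six boundary conditions; a minimal sketch, with illustrative rest-to-rest boundary values:

```python
import numpy as np

def quintic_coeffs(q0, v0, a0, qf, vf, af, T):
    """Coefficients of q(t) = c0 + c1 t + ... + c5 t^5 matching position,
    velocity and acceleration at t = 0 and t = T (a standard quintic
    point-to-point trajectory)."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ])
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

c = quintic_coeffs(0, 0, 0, 1.0, 0, 0, T=2.0)   # rest-to-rest motion
t = np.linspace(0, 2, 5)
print(np.polyval(c[::-1], t))                    # smooth 0 -> 1 profile
```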
Pricing and simulation for real estate index options: Radial basis point interpolation
NASA Astrophysics Data System (ADS)
Gong, Pu; Zou, Dong; Wang, Jiayue
2018-06-01
This study employs meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on a real estate index. This method combines radial and polynomial basis functions, which guarantees an interpolation scheme with the Kronecker property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and Richardson extrapolation are employed to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of the method.
Jimena: efficient computing and system state identification for genetic regulatory networks.
Karl, Stefan; Dandekar, Thomas
2013-10-11
Boolean networks capture the switching behavior of many naturally occurring regulatory networks. For semi-quantitative modeling, interpolation between ON and OFF states is necessary. The high degree polynomial interpolation of Boolean genetic regulatory networks (GRNs) in cellular processes such as apoptosis or proliferation allows for the modeling of a wider range of node interactions than continuous activator-inhibitor models, but suffers from scaling problems for networks which contain nodes with more than ~10 inputs. Many GRNs from the literature or new gene expression experiments exceed those limitations, so a new approach was developed. (i) As part of our new GRN simulation framework Jimena we introduce and set up Boolean-tree-based data structures; (ii) corresponding algorithms greatly expedite the calculation of the polynomial interpolation in almost all cases, thereby expanding the range of networks which can be simulated by this model in reasonable time. (iii) Stable states of discrete models are efficiently counted and identified using binary decision diagrams. As an application example, we show how system states can now be sampled efficiently in small- to large-scale hormone and disease networks (Arabidopsis thaliana development and immunity, pathogen Pseudomonas syringae and modulation by cytokinins and plant hormones). Jimena simulates currently available GRNs about 10-100 times faster than the previous implementation of the polynomial interpolation model, and even greater gains are achieved for large scale-free networks. This speed-up also facilitates a much more thorough sampling of continuous state spaces, which may lead to the identification of new stable states. Mutants of large networks can be constructed and analyzed very quickly, enabling new insights into network robustness and behavior.
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enable us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei
This paper proposes an FPGA-based image interpolation algorithm using bilinear interpolation together with a color correction algorithm based on polynomial regression, addressing the limited number of imaging pixels and the color distortion of ultra-thin electronic endoscopes. Simulation results showed that the proposed algorithm realized real-time display of 1280 x 720@60Hz HD video and, using the X-rite color checker as the standard, reduced the average color difference by about 30% compared with that before color correction.
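A software sketch of the bilinear interpolation core (without the FPGA pipelining or the polynomial color correction stage); the tile data and upscale factor are illustrative:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Bilinear image interpolation: each output pixel is a distance-weighted
    blend of its four nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]              # fractional row offsets
    fx = (xs - x0)[None, :]              # fractional column offsets
    tl = img[np.ix_(y0, x0)]             # four neighbouring pixels
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
            + bl * fy * (1 - fx) + br * fy * fx)

tile = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_upscale(tile, 2).shape)   # (8, 8)
```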
Taylor's Theorem: The Elusive "c" Is Not So Elusive
ERIC Educational Resources Information Center
Kreminski, Richard
2010-01-01
For a suitably nice, real-valued function f defined on an open interval containing [a,b], f(b) can be expressed as p_n(b) (the nth Taylor polynomial of f centered at a) plus an error term of the (Lagrange) form f^(n+1)(c)(b-a)^(n+1)/(n+1)! for some c in (a,b). This article is for those who think that not…
NASA Astrophysics Data System (ADS)
Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.
2018-03-01
The difficulty of solving min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite- as well as finite and free final time problems by the domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
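For reference, the third order Hermite interpolation form on a unit interval is the following; this is the generic interpolant only, not the paper's surface-potential-to-charge mapping.

```python
import numpy as np

def hermite3(p0, m0, p1, m1, t):
    """Third-order Hermite interpolation on [0, 1]: matches the endpoint
    values p0, p1 and the endpoint slopes m0, m1."""
    h00 = 2 * t**3 - 3 * t**2 + 1     # standard cubic Hermite basis
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

t = np.linspace(0, 1, 5)
print(hermite3(0.0, 1.0, 1.0, 0.5, t))   # endpoint values/slopes are illustrative
```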
An integral conservative gridding--algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
Satellite Orbit Theory for a Small Computer.
1983-12-15
… interpolating polynomials are established for them across the pass. Both sets of interpolating polynomials are finally used to provide osculating orbital elements at arbitrary times during the … high precision orbital elements at epoch, a corresponding set of initial mean elements must be determined for the semianalytical model. It is importan…
An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles
2012-06-01
… Armament Data Line; DOF, Degree of Freedom; PS, Pseudospectral; LGL, Legendre-Gauss-Lobatto quadrature nodes; ODE, Ordinary Differential Equation. … low order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray … claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory …
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as for Boyer's method. An FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases
NASA Astrophysics Data System (ADS)
Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre
2011-12-01
Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.
Polynomial Expressions for Estimating Elastic Constants From the Resonance of Circular Plates
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Singh, Abhishek
2005-01-01
Two approaches were taken to make convenient spreadsheet calculations of elastic constants from resonance data and the tables in ASTM C1259 and E1876: polynomials were fit to the tables, and an automated spreadsheet interpolation routine was generated. To compare the approaches, the resonant frequencies of circular plates made of glass, hardened maraging steel, alpha silicon carbide, silicon nitride, tungsten carbide, tape cast NiO-YSZ, and zinc selenide were measured. The elastic constants, as calculated via the polynomials and linear interpolation of the tabular data in ASTM C1259 and E1876, were found comparable for engineering purposes, with the differences typically being less than 0.5 percent. Calculation of additional ν values at t/R between 0 and 0.2 would allow better curve fits. This is not necessary for common engineering purposes; however, it might benefit the testing of emerging thin structures such as fuel cell electrolytes, gas conversion membranes, and coatings when Poisson's ratio is less than 0.15 and high precision is needed.
Positivity-preserving High Order Finite Difference WENO Schemes for Compressible Euler Equations
2011-07-15
… the WENO reconstruction. We assume that there is a polynomial vector q_i(x) = (ρ_i(x), m_i(x), E_i(x))^T of degree k which is (k+1)-th order accurate … w^-_{i+1/2} = q_i(x_{i+1/2}). The existence of such polynomials can be established by interpolation for WENO schemes. For example, for the fifth-order WENO scheme, there is a unique vector of polynomials of degree four q_i(x) satisfying q_i(x_{i-1/2}) = w^+_{i-1/2}, q_i(x_{i+1/2}) = w^-_{i+1/2} and (1/Δx) ∫_{I_j} q_i …
Transform Decoding of Reed-Solomon Codes. Volume II. Logical Design and Implementation.
1982-11-01
… A_j = Σ_{i=0}^{n-1} a_i b^{ij} = a(b^j); j = 0, 1, …, n-1 (2-8). Similarly, the inverse transform is obtained by interpolation of the polynomial a(z) from its n … with the transform so that either a forward or an inverse transform may be used to encode. The only requirement is that the reverse of the encoding … inverse transform of the received sequence is the polynomial sum r(z) = e(z) + a(z), where e(z) is the inverse transform of the error polynomial E(z), and a…
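The evaluation/interpolation pairing in the reconstructed fragment can be demonstrated over a small prime field; this toy sketch uses GF(13) with b = 2 purely for illustration (practical Reed-Solomon codecs work over GF(2^m)).

```python
# Toy sketch over GF(13): encoding as polynomial evaluation at powers of
# an element b of full multiplicative order, decoding by Lagrange
# interpolation. The field, b and n are assumptions chosen for illustration.
p, b, n = 13, 2, 12            # 2 has multiplicative order 12 mod 13

def transform(a):
    """Forward transform A_j = sum_i a_i b^(ij) = a(b^j) mod p, eq. (2-8)."""
    return [sum(ai * pow(b, i * j, p) for i, ai in enumerate(a)) % p
            for j in range(n)]

def mul_linear(poly, root):
    """Multiply a coefficient list (lowest degree first) by (z - root) mod p."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - root * c) % p
        out[i + 1] = (out[i + 1] + c) % p
    return out

def interpolate(A):
    """Inverse transform: recover a(z) by Lagrange interpolation mod p."""
    xs = [pow(b, j, p) for j in range(n)]
    coeffs = [0] * n
    for j, xj in enumerate(xs):
        num, den = [1], 1
        for k, xk in enumerate(xs):
            if k != j:
                num = mul_linear(num, xk)
                den = den * (xj - xk) % p
        scale = A[j] * pow(den, -1, p) % p     # modular inverse, Python >= 3.8
        for i, c in enumerate(num):
            coeffs[i] = (coeffs[i] + scale * c) % p
    return coeffs

a = [5, 7, 0, 1] + [0] * (n - 4)               # message as polynomial coefficients
assert interpolate(transform(a)) == a
```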
2008-06-01
Geometry Interpolation: the function space V_p^H consists of discontinuous, piecewise polynomials. This work used a polynomial basis for V_p^H such … between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and a multi-dimensional setting. Before continuing with the … inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number M∞ = 0.95. The …
Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping
2018-07-01
Failure caused by seepage is the most common one in dike engineering. Seepage in dikes, which are longitudinally extended structures, is random, strongly concealed and initially small in magnitude. Using a distributed fiber temperature sensing system (DTS) with an improved optical fiber layout scheme, the location of the initial interpolation point of the saturation line is obtained. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltration surface of the full dike section is generated. Combining the linear optical fiber seepage monitoring method with BLICM, the approach is applied to an engineering case, which shows that a real-time seepage monitoring technique is available for the full section of a dike.
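The interpolation ingredient of BLICM, barycentric Lagrange interpolation, can be sketched compactly; the nodes and head values below are hypothetical, and the collocation machinery of the method is not shown.

```python
import numpy as np

def bary_weights(x):
    """Barycentric weights w_j = 1 / prod_{k != j}(x_j - x_k)."""
    return np.array([1.0 / np.prod(x[j] - np.delete(x, j))
                     for j in range(len(x))])

def bary_interp(x, f, w, t):
    """Second (true) barycentric formula; exact at the nodes themselves."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    d = t[:, None] - x[None, :]
    exact = np.isclose(d, 0.0)
    d = np.where(exact, 1.0, d)          # placeholder to avoid dividing by zero
    out = (w / d * f).sum(axis=1) / (w / d).sum(axis=1)
    hit = exact.any(axis=1)
    out[hit] = f[exact.argmax(axis=1)[hit]]   # return nodal values exactly
    return out

x = np.linspace(0.0, 1.0, 7)                         # hypothetical sensor positions
f = np.array([8.2, 8.0, 7.6, 7.1, 6.9, 6.8, 6.8])    # hypothetical head readings
w = bary_weights(x)
print(bary_interp(x, f, w, [0.25, 0.55]))
```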
NASA Astrophysics Data System (ADS)
Guo, Tongqing; Chen, Hao; Lu, Zhiliang
2018-05-01
Aiming at extremely large deformations, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, the dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, the Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deforming technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with traditional methods, the results demonstrate that the present method shows stronger deformation capability and higher dynamic mesh quality.
Developing the Polynomial Expressions for Fields in the ITER Tokamak
NASA Astrophysics Data System (ADS)
Sharma, Stephen
2017-10-01
The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced into the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area.
2013-01-01
[Figure caption] Gravity wave: a slice of the potential temperature perturbation (at y = 50 km) after 700 s for 30 × 30 × 5 elements with 4th-order polynomials. … Key words: cloud-resolving model; compressible flow; element-based Galerkin methods; Euler; global model; IMEX; Lagrange; Legendre. … methods in terms of accuracy and efficiency for two types of geophysical fluid dynamics problems: buoyant convection and inertia-gravity waves. These …
A hybridized formulation for the weak Galerkin mixed finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM) which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
Interpolating Polynomial Macro-Elements with Tension Properties
2000-01-01
Univ. Calgary, 1978. Paolo Costantini Dipartimento di Matematica " Roberto Magari" Via del Capitano 15 53100 Siena, Italy costantini~unisi. it Carla...Manni Dipartimento di Matematica Via Carlo Alberto 10 10123 Torino, Italy manniDdm .unito. it
Polynomial-interpolation algorithm for van der Pauw Hall measurement in a metal hydride film
NASA Astrophysics Data System (ADS)
Koon, D. W.; Ares, J. R.; Leardini, F.; Fernández, J. F.; Ferrer, I. J.
2008-10-01
We apply a four-term polynomial-interpolation extension of the van der Pauw Hall measurement technique to a 330 nm Mg-Pd bilayer during both absorption and desorption of hydrogen at room temperature. We show that standard versions of the van der Pauw DC Hall measurement technique produce an error of over 100% due to a drifting offset signal and can lead to unphysical interpretations of the physical processes occurring in this film. The four-term technique effectively removes this source of error, even when the offset signal is drifting by an amount larger than the Hall signal in the time interval between successive measurements. This technique can be used to increase the resolution of transport studies of any material in which the resistivity is rapidly changing, particularly when the material is changing from metallic to insulating behavior.
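The drift cancellation at the heart of such a four-term scheme can be illustrated with a short sketch. The binomial weighting below is a generic construction that cancels any offset drifting as a polynomial of degree up to two across four equally spaced, polarity-alternating readings; it illustrates the principle and is not necessarily the exact combination used in the paper.

```python
import numpy as np

def hall_signal_four_term(r):
    """Estimate a polarity-alternating Hall signal from four successive
    readings r[0..3] taken at equal time intervals.

    Model: r[k] = (-1)**k * s + d(t_k), where d is an offset drifting
    smoothly in time.  The binomial combination below is a third finite
    difference with alternating signs, so any drift d that is polynomial
    of degree <= 2 over the four samples cancels exactly, while the
    alternating signal s accumulates with total weight 8.
    """
    r = np.asarray(r, dtype=float)
    return (r[0] - 3.0 * r[1] + 3.0 * r[2] - r[3]) / 8.0

# Example: a signal of 1.0 buried under a quadratic drift 50x larger.
t = np.arange(4)
readings = (-1.0) ** t * 1.0 + 50.0 + 8.0 * t + 3.0 * t**2
print(hall_signal_four_term(readings))  # -> 1.0 (drift removed exactly)
```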
High resolution frequency analysis techniques with application to the redshift experiment
NASA Technical Reports Server (NTRS)
Decher, R.; Teuber, D.
1975-01-01
High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of 0.00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include the fast Fourier transform, the discrete Fourier transform, Lagrange interpolation, golden section search, and the adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
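Of the building blocks listed, the combination of a discrete Fourier transform with local polynomial interpolation is easy to sketch. The illustration below, assuming a single dominant tone, refines an FFT peak with a three-point (parabolic) Lagrange fit through the log-magnitude spectrum; this is a textbook refinement shown for orientation, not the experiment's actual pipeline.

```python
import numpy as np

def refine_peak_frequency(x, fs):
    """Estimate a tone frequency beyond the FFT bin resolution by fitting
    a quadratic (three-point Lagrange) polynomial through the
    log-magnitude spectrum around the peak bin."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = int(np.argmax(X[1:-1])) + 1            # peak bin, away from edges
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # vertex of the parabola
    return (k + delta) * fs / len(x)

fs = 16.0                                      # sample rate, Hz
t = np.arange(4096) / fs
print(refine_peak_frequency(np.sin(2 * np.pi * 1.05 * t), fs))  # ~1.05 Hz
```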
Ortho Image and DTM Generation with Intelligent Methods
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadeghian, S.
2013-10-01
Artificial intelligence algorithms are nowadays being adopted in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimising image processing programs such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 was tested with rational functions and with 2D and 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimisation of the rational functions and of the 2D and 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixel sizes, and the accuracy (RMSE) with the GA and the rational function method for the Worldview-2 image was 0.930 pixel sizes. As a further artificial intelligence optimisation method, neural networks were used: with a perceptron network on the Worldview-2 image, a result of 0.84 pixel sizes was obtained with 4 neurons in the middle layer. The conclusion is that artificial intelligence algorithms make it possible to optimise the existing models and obtain better results than the usual ones. Finally, the artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimising interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation, and that using such networks for interpolation, and optimising the inverse-distance-based weighting method with GA, leads to highly accurate estimation of heights.
Method for Grey Scale Mapping of Underground Obstacles Using Video Pulse Radar Return
1978-12-01
[Recoverable fragment] ...the bottom of Figure 3 is apparent. However, Figure 4 exhibits the one observed weakness of the Lagrange (or any polynomial) method. Large... [remainder of the scanned page is unrecoverable]
Grid Effect on Spherical Shallow Water Jets Using Continuous and Discontinuous Galerkin Methods
2013-01-01
The high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid by projecting the linear elements onto the auxiliary gnomonic space... mapping, the triangles are subdivided into smaller ones by a Lagrange polynomial of order nI. The number of quadrilateral elements and grid points of... of the acceleration of gravity and the vertical height of the fluid), ν∇² is the artificial viscosity term with viscous coefficient ν = 1 × 10⁵ m² s⁻¹.
Analytical Solution for the Free Vibration Analysis of Delaminated Timoshenko Beams
Abedi, Maryam
2014-01-01
This work presents a method to find exact solutions for the free vibration analysis of a delaminated beam based on Timoshenko beam theory with different boundary conditions. The solutions are obtained by the method of Lagrange multipliers, in which the free vibration problem is posed as a constrained variational problem. The Legendre orthogonal polynomials are used as the beam eigenfunctions. Natural frequencies and mode shapes of various Timoshenko beams are presented to demonstrate the efficiency of the methodology. PMID:24574879
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
Dynamic graphs, community detection, and Riemannian geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun
A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2011-07-01
Results of smoothing the ephemerides of the major planets and the Moon with cubic polynomials are presented. The ephemerides considered are DE405, DE406, DE408, DE421, DE423 and DE722. The goal of the smoothing is to eliminate the discontinuous behavior of interpolated coordinates and their derivatives at the junctions of adjacent interpolation intervals when calculations are made with 34-digit decimal accuracy. The reason for this behavior is the limited, 16-digit decimal accuracy of the coefficients of the interpolating Chebyshev polynomials stored in the ephemerides. Such discontinuity of the perturbing bodies' coordinates significantly reduces the advantages of 34-digit calculations, because the accuracy of numerical integration of asteroids' equations of motion increases in this case by just 3 orders of magnitude compared with 16-digit calculations. It is demonstrated that the cubic-polynomial smoothing of the ephemerides eliminates the jumps in the perturbing bodies' coordinates and their derivatives, which increases the accuracy of numerical integration by 7-9 orders of magnitude. All calculations in this work were made with 34-digit decimal accuracy on the computer cluster "Skif Cyberia" of Tomsk State University.
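The abstract does not spell out the smoothing construction, but a natural way to remove jumps in both a coordinate and its derivative at an interval junction is a cubic matched to value and slope on both sides. A minimal sketch under that assumption:

```python
import numpy as np

def hermite_cubic(p0, v0, p1, v1, t):
    """Cubic on t in [0, 1] matching value/derivative (p0, v0) at t=0 and
    (p1, v1) at t=1 -- one way to bridge adjacent interpolation intervals
    so that a coordinate and its first derivative stay continuous."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * v0 + h01 * p1 + h11 * v1

# Blend across a junction where two Chebyshev pieces disagree slightly.
t = np.linspace(0.0, 1.0, 5)
print(hermite_cubic(1.0000, 0.50, 1.0001, 0.48, t))
```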
NASA Astrophysics Data System (ADS)
Bilchenko, G. G.; Bilchenko, N. G.
2018-03-01
Mathematical modeling problems for the effective control of heat and mass transfer on permeable surfaces of hypersonic aircraft are considered. An analysis of the constructive and gasdynamical restrictions on the control (the blowing) is carried out for porous and perforated surfaces. Classes of functions that allow the controls to be realized while taking into account the arising types of restrictions are suggested. Estimates of the computational complexity of applying the W. G. Horner scheme when the C. Hermite interpolation polynomial is used are given.
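For context, Horner's scheme evaluates a degree-n polynomial with n multiplications and n additions, which is what makes it attractive for complexity estimates like those mentioned above. A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's scheme.

    coeffs holds a_n, ..., a_1, a_0 (highest degree first); the loop
    costs one multiply and one add per coefficient, versus O(n^2)
    multiplies for naive term-by-term evaluation.
    """
    acc = 0.0
    for a in coeffs:
        acc = acc * x + a
    return acc

print(horner([2.0, -3.0, 0.0, 5.0], 2.0))  # 2x^3 - 3x^2 + 5 at x=2 -> 9.0
```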
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
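A sketch of the described inversion, with illustrative calibration coefficients and count limits (the actual Dryden implementation is in FORTRAN; names here are hypothetical):

```python
import numpy as np

def eu_to_counts(poly, eu_target, count_lo, count_hi, max_iter=20):
    """Invert a counts->EU calibration polynomial with Newton-Raphson.

    poly: coefficient array, highest degree first (np.polyval convention).
    The initial guess comes from reverse linear interpolation between the
    EU values at the count limits, as described in the abstract above.
    """
    dpoly = np.polyder(poly)
    eu_lo, eu_hi = np.polyval(poly, count_lo), np.polyval(poly, count_hi)
    # reverse linear interpolation for the starting count
    c = count_lo + (eu_target - eu_lo) * (count_hi - count_lo) / (eu_hi - eu_lo)
    for _ in range(max_iter):
        step = (np.polyval(poly, c) - eu_target) / np.polyval(dpoly, c)
        c -= step
        if abs(step) < 0.5:        # converged to within half a count
            break
    return c

# Hypothetical third-order calibration: EU = 1e-6*c^3 + 0.02*c - 10
poly = [1e-6, 0.0, 0.02, -10.0]
c = eu_to_counts(poly, eu_target=50.0, count_lo=0, count_hi=4095)
print(c, np.polyval(poly, c))      # count whose EU value is ~50.0
```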
Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, R.
1991-01-01
An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages over the control volumes surrounding the nodes of the mesh. Concerning the first problem, we have studied the behavior of the highest-degree coefficients of the Lagrange interpolation of functions which may admit discontinuities along locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction via deconvolution method. Unfortunately, as we show, it is well suited only for regular meshes, but we also show how to overcome this difficulty. The global method has the expected order of accuracy but is conservative only up to a high-order quadrature formula. Some numerical examples are given which demonstrate the efficiency of the method.
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools in many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in the fields of science, engineering, design and various project administrations. One of the most significant steps in the DTM technique is the interpolation of elevations to create a continuous surface, and there are several interpolation methods, whose results depend on the environmental conditions and the input data. The usual interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of Digital Elevation Models (DEM); the aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller ones. The results obtained from applying GA and ANN individually are compared with typical interpolation methods for creating elevations. They show that AI methods have a high potential for the interpolation of elevations: using neural network algorithms for the interpolation, and optimising the IDW-based weighting method with GA, yields highly precise elevation estimates.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
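Two of the five interpolators tested are available directly in SciPy; the sketch below, on hypothetical peak data, shows how a sparsely sampled first-dimension trace might be upsampled before cross-correlation alignment (an illustration, not the study's actual processing code).

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Sparsely sampled first-dimension chromatogram (hypothetical peak data).
t = np.linspace(0.0, 10.0, 11)                   # one sample per modulation
signal = np.exp(-0.5 * ((t - 4.3) / 1.2) ** 2)   # Gaussian-like elution peak

t_fine = np.linspace(0.0, 10.0, 201)             # upsample before alignment
pchip = PchipInterpolator(t, signal)(t_fine)     # shape-preserving, no overshoot
spline = CubicSpline(t, signal)(t_fine)          # smoother, may overshoot

# Cross-correlating upsampled traces from different injections would then
# give sub-sample retention-time shifts; here we just compare peak positions.
dt = t_fine[1] - t_fine[0]
print((np.argmax(pchip) - np.argmax(spline)) * dt)
```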
C library for topological study of the electronic charge density.
Vega, David; Aray, Yosslen; Rodríguez, Jesús
2012-12-05
The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atomic charges in a molecule or crystal. For this study, it is necessary to calculate, at every point of space, the electronic density and the values of its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, formulas for the gradient vector, the Hessian matrix and the Laplacian are obtained at every point of space. More complex functions, such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths), were programmed. Since in some crystals the unit cell has angles different from 90°, the described library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
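The library itself is written in C; the one-dimensional Python sketch below illustrates the underlying idea of differentiating the Lagrange basis analytically, so that the interpolant and its derivative come from the same polynomial. Tensor products of such bases give the 3D gradient and Hessian.

```python
import numpy as np

def lagrange_value_and_derivative(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) and its first
    derivative at x, by differentiating the basis polynomials directly:
    L_j'(x) = sum_{k!=j} prod_{m!=j,k} (x - x_m) / prod_{m!=j} (x_j - x_m).
    """
    n = len(xs)
    val = dval = 0.0
    for j in range(n):
        denom = np.prod([xs[j] - xs[m] for m in range(n) if m != j])
        num = np.prod([x - xs[m] for m in range(n) if m != j])
        dnum = sum(
            np.prod([x - xs[m] for m in range(n) if m not in (j, k)])
            for k in range(n) if k != j
        )
        val += ys[j] * num / denom
        dval += ys[j] * dnum / denom
    return val, dval

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs**3                          # a cubic reproduces f(x) = x^3 exactly
print(lagrange_value_and_derivative(xs, ys, 1.5))   # -> (3.375, 6.75)
```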
Polynomial Approximation of Functions: Historical Perspective and New Tools
ERIC Educational Resources Information Center
Kidron, Ivy
2003-01-01
This paper examines the effect of applying symbolic computation and graphics to enhance students' ability to move from a visual interpretation of mathematical concepts to formal reasoning. The mathematics topics involved, Approximation and Interpolation, were taught according to their historical development, and the students tried to follow the…
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
An Experimental Weight Function Method for Stress Intensity Factor Calibration.
1980-04-01
[Recoverable fragments] ...in accuracy to the ones obtained by Macha (Reference 10) for the laser interferometry technique. The values of KI from the interpolating polynomial... Measurement. Air Force Material Laboratories, AFML-TR-74-75, July 1974. 10. D. E. Macha, W. N. Sharpe Jr., and A. F. Grandt Jr., A Laser Interferometry...
A GENERAL ALGORITHM FOR THE CONSTRUCTION OF CONTOUR PLOTS
NASA Technical Reports Server (NTRS)
Johnson, W.
1994-01-01
The graphical presentation of experimentally or theoretically generated data sets frequently involves the construction of contour plots. A general computer algorithm has been developed for the construction of contour plots. The algorithm provides for efficient and accurate contouring with a modular approach which allows flexibility in modifying the algorithm for special applications. The algorithm accepts as input data values at a set of points irregularly distributed over a plane. The algorithm is based on an interpolation scheme in which the points in the plane are connected by straight line segments to form a set of triangles. In general, the data are smoothed using a least-squares-error fit to a bivariate polynomial. To construct the contours, interpolation along the edges of the triangles is performed, using the bivariate polynomial if data smoothing was performed. Once the contour points have been located, the contour may be drawn. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 100K of 8-bit bytes. This computer algorithm was developed in 1981.
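The key interpolation step is easy to state: a contour at a given level crosses a triangle edge where linear interpolation between the two vertex values reaches that level. A minimal sketch of that step (the original program is FORTRAN IV; this is an illustration, not its code):

```python
def edge_crossing(p0, f0, p1, f1, level):
    """Return the (x, y) point where a contour at `level` crosses the
    triangle edge p0-p1, assuming f varies linearly along the edge,
    or None if the level does not bracket the endpoint values."""
    if f0 == f1 or (f0 - level) * (f1 - level) > 0.0:
        return None                   # same side, or edge at constant f
    t = (level - f0) / (f1 - f0)      # fractional position along the edge
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))

# One edge of a triangle: f=1.0 at (0,0) and f=4.0 at (3,0); contour at 2.0.
print(edge_crossing((0.0, 0.0), 1.0, (3.0, 0.0), 4.0, 2.0))   # -> (1.0, 0.0)
```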
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.
NASA Astrophysics Data System (ADS)
Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.
2018-04-01
In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.
Uncertainty propagation in the calibration equations for NTC thermistors
NASA Astrophysics Data System (ADS)
Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen
2018-06-01
The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
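As an illustration of the kind of calibration equation involved, the sketch below fits the common Steinhart-Hart form by linear least squares to hypothetical calibration points; the specific equations and uncertainty expressions analyzed in the paper are not reproduced here.

```python
import numpy as np

# Hypothetical calibration points for an NTC thermistor:
# resistance (ohms) and temperature (K).
R = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0])
T = np.array([273.15, 283.15, 293.15, 303.15, 313.15])

# Steinhart-Hart form: 1/T = A + B*ln(R) + C*ln(R)^3, fitted by linear
# least squares in the coefficients (A, B, C).
L = np.log(R)
M = np.column_stack([np.ones_like(L), L, L**3])
A, B, C = np.linalg.lstsq(M, 1.0 / T, rcond=None)[0]

# Interpolated temperature at an intermediate resistance:
Rq = 10000.0
print(1.0 / (A + B * np.log(Rq) + C * np.log(Rq) ** 3))  # ~297-298 K
```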
Development of a Boundary Layer Property Interpolation Tool in Support of Orbiter Return To Flight
NASA Technical Reports Server (NTRS)
Greene, Francis A.; Hamilton, H. Harris
2006-01-01
A new tool was developed to predict the boundary layer quantities required by several physics-based predictive/analytic methods that assess damaged Orbiter tile. This new tool, the Boundary Layer Property Prediction (BLPROP) tool, supplies boundary layer values used in correlations that determine boundary layer transition onset and surface heating-rate augmentation/attenuation factors inside tile gouges (i.e. cavities). BLPROP interpolates through a database of computed solutions and provides boundary layer and wall data (delta, theta, Re(sub theta)/M(sub e), Re(sub theta), P(sub w), and q(sub w)) based on user-input surface location and free-stream conditions. Surface locations are limited to the Orbiter's windward surface. Constructed using predictions from an inviscid/boundary-layer method and benchmark viscous CFD, the computed database covers the hypersonic continuum flight regime based on two reference flight trajectories. First-order one-dimensional Lagrange interpolation accounts for Mach number and angle-of-attack variations, whereas non-dimensional normalization accounts for differences between the reference and input Reynolds number. Employing the same computational methods used to construct the database, solutions at other trajectory points taken from previous STS flights were computed; these results validate the BLPROP algorithm. Percentage differences between interpolated and computed values are presented and are used to establish the level of uncertainty of the new tool.
Modeling Propagation of Shock Waves in Metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, W M; Molitoris, J D
2005-08-19
We present modeling results for the propagation of strong shock waves in metals. In particular, we use an arbitrary Lagrange Eulerian (ALE3D) code to model the propagation of strong pressure waves (P ≈ 300 to 400 kbars) generated with high explosives in contact with aluminum cylinders. The aluminum cylinders are assumed to be both flat-topped and to have large-amplitude curved surfaces. We use 3D Lagrange mechanics. For the aluminum we use a rate-independent Steinberg-Guinan model, where the yield strength and shear modulus depend on pressure, density and temperature. The calculation of the melt temperature is based on the Lindemann law. At melt, the yield strength and shear modulus are set to zero. The pressure is represented as a seven-term polynomial as a function of density. For the HMX-based high explosive, we use a JWL equation of state with a program burn model that gives the correct detonation velocity and C-J pressure (P ≈ 390 kbars). For the case of the large-amplitude curved surface, we discuss the evolving shock structure in terms of the early shock propagation experiments by Sakharov.
Modeling Propagation of Shock Waves in Metals
NASA Astrophysics Data System (ADS)
Howard, W. M.; Molitoris, J. D.
2006-07-01
We present modeling results for the propagation of strong shock waves in metals. In particular, we use an arbitrary Lagrange Eulerian (ALE3D) code to model the propagation of strong pressure waves (P ≈ 300 to 400 kbars) generated with high explosives in contact with aluminum cylinders. The aluminum cylinders are assumed to be both flat-topped and to have large-amplitude curved surfaces. We use 3D Lagrange mechanics. For the aluminum we use a rate-independent Steinberg-Guinan model, where the yield strength and shear modulus depend on pressure, density and temperature. The calculation of the melt temperature is based on the Lindemann law. At melt, the yield strength and shear modulus are set to zero. The pressure is represented as a seven-term polynomial as a function of density. For the HMX-based high explosive, we use a JWL equation of state with a program burn model that gives the correct detonation velocity and C-J pressure (P ≈ 390 kbars). For the case of the large-amplitude curved surface, we discuss the evolving shock structure in terms of the early shock propagation experiments by Sakharov.
Alvermann, A; Fehske, H
2009-04-17
We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.
Expressions for Fields in the ITER Tokamak
NASA Astrophysics Data System (ADS)
Sharma, Stephen
2017-10-01
The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. The Feynman-Hellmann formalism and Heaviside step functional forms are introduced into the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.
Spectral multigrid methods for elliptic equations 2
NASA Technical Reports Server (NTRS)
Zang, T. A.; Wong, Y. S.; Hussaini, M. Y.
1983-01-01
A detailed description of spectral multigrid methods is provided. This includes the interpolation and coarse-grid operators for both periodic and Dirichlet problems. The spectral methods for periodic problems use Fourier series and those for Dirichlet problems are based upon Chebyshev polynomials. An improved preconditioning for Dirichlet problems is given. Numerical examples and practical advice are included.
Discrete-Layer Piezoelectric Plate and Shell Models for Active Tip-Clearance Control
NASA Technical Reports Server (NTRS)
Heyliger, P. R.; Ramirez, G.; Pei, K. C.
1994-01-01
The objectives of this work were to develop computational tools for the analysis of active-sensory composite structures with added or embedded piezoelectric layers. The targeted application for this class of smart composite laminates and the analytical development is the accomplishment of active tip-clearance control in turbomachinery components. Two distinct theories and analytical models were developed and explored under this contract: (1) a discrete-layer plate theory and corresponding computational models, and (2) a three dimensional general discrete-layer element generated in curvilinear coordinates for modeling laminated composite piezoelectric shells. Both models were developed from the complete electromechanical constitutive relations of piezoelectric materials, and incorporate both displacements and potentials as state variables. This report describes the development and results of these models. The discrete-layer theories imply that the displacement field and electrostatic potential through-the-thickness of the laminate are described over an individual layer rather than as a smeared function over the thickness of the entire plate or shell thickness. This is especially crucial for composites with embedded piezoelectric layers, as the actuating and sensing elements within these layers are poorly represented by effective or smeared properties. Linear Lagrange interpolation polynomials were used to describe the through-thickness laminate behavior. Both analytic and finite element approximations were used in the plane or surface of the structure. In this context, theoretical developments are presented for the discrete-layer plate theory, the discrete-layer shell theory, and the formulation of an exact solution for simply-supported piezoelectric plates. Finally, evaluations and results from a number of separate examples are presented for the static and dynamic analysis of the plate geometry. Comparisons between the different approaches are provided when possible, and initial conclusions regarding the accuracy and limitations of these models are given.
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yields a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of the GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by its scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
Investigations into the shape-preserving interpolants using symbolic computation
NASA Technical Reports Server (NTRS)
Lam, Maria
1988-01-01
Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most investigations of this problem utilize quadratic splines or Hermite polynomials, and a similar approach is adopted here. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting given data points by a conic section has also been investigated as part of the effort to study shape-preserving quadratic splines.
Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and Dem's
NASA Astrophysics Data System (ADS)
Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.
2016-06-01
Geological slope failure processes have been observed on the Moon's surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not been produced yet. For a preliminary survey, WAC images and DEMs from LROC at 100 m/pixel have been exploited, in combination with the criteria applied by Brunetti et al. (2015), to detect landslides. These criteria are based on the visual analysis of optical images to recognize mass wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of the odd coefficients of the estimated Chebyshev polynomials. A case study on the Cassini A crater demonstrates the key points of the proposed methodology and outlines the future development required to carry it out.
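A sketch of the asymmetry test on synthetic profiles, assuming the cross-section is rescaled to [-1, 1] before fitting (a perfectly symmetric bowl has only even Chebyshev terms, so large odd terms flag the asymmetry that mass wasting introduces):

```python
import numpy as np

def odd_coefficient_energy(x, elevation, deg=16):
    """Fit a Chebyshev series to a crater cross-section and return the
    summed magnitude of its odd coefficients as an asymmetry indicator."""
    xn = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # map to [-1, 1]
    c = np.polynomial.chebyshev.chebfit(xn, elevation, deg)
    return np.abs(c[1::2]).sum()

x = np.linspace(-1.0, 1.0, 200)
bowl = x**2 - 1.0                                        # symmetric crater
slumped = bowl + 0.15 * np.exp(-((x - 0.4) / 0.2) ** 2)  # one-sided deposit
print(odd_coefficient_energy(x, bowl))       # ~0 (even terms only)
print(odd_coefficient_energy(x, slumped))    # clearly larger
```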
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
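A one-dimensional illustration of the MLS idea, with a Gaussian weight centred on each query point (the paper works in two to four design variables; the kernel width h is illustrative):

```python
import numpy as np

def mls_predict(x_data, y_data, x_query, degree=2, h=0.5):
    """Moving Least Squares in 1D: at each query point, fit a local
    polynomial by weighted least squares with Gaussian weights centred
    on the query, then evaluate the local fit there."""
    out = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_data - xq) / h) ** 2)     # nearby samples dominate
        # np.polyfit weights multiply the unsquared residuals, so pass sqrt(w)
        coeffs = np.polyfit(x_data, y_data, degree, w=np.sqrt(w))
        out.append(np.polyval(coeffs, xq))
    return np.array(out)

x = np.linspace(0.0, 2.0 * np.pi, 25)
y = np.sin(x) + 0.05 * np.random.default_rng(0).standard_normal(x.size)
print(mls_predict(x, y, [1.0, 3.0]))   # smooth estimates near sin(1), sin(3)
```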
Some Applications of Gröbner Bases in Robotics and Engineering
NASA Astrophysics Data System (ADS)
Abłamowicz, Rafał
Gröbner bases in polynomial rings have numerous applications in geometry, applied mathematics, and engineering. We show a few applications of Gröbner bases in robotics, formulated in the language of Clifford algebras, and in engineering to the theory of curves, including Fermat and Bézier cubics, and interpolation functions used in finite element theory.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
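A sketch of the polynomial-interpolation idea on synthetic track data: fit a low-order polynomial to recent along-track positions and extrapolate ahead, with degree 1 recovering a dead-reckoning-like baseline. Parameters and data are illustrative, not the paper's model.

```python
import numpy as np

def extrapolate_track(t_hist, x_hist, t_future, degree=2):
    """Fit a low-order polynomial to recent position history and
    extrapolate it forward; degree=1 is the constant-velocity
    (dead-reckoning-like) baseline."""
    coeffs = np.polyfit(t_hist, x_hist, degree)
    return np.polyval(coeffs, t_future)

t = np.arange(0.0, 60.0, 5.0)             # last 60 s of track data
x = 100.0 - 3.0 * t + 0.01 * t**2         # decelerating along-track position
print(extrapolate_track(t, x, 120.0))             # quadratic prediction
print(extrapolate_track(t, x, 120.0, degree=1))   # dead-reckoning-like
```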
Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio
2018-05-29
A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict the column behavior for highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg/mL column or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components. This article is protected by copyright. All rights reserved.
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Access Scheme for Controlling Mobile Agents and its Application to Share Medical Information.
Liao, Yu-Ting; Chen, Tzer-Shyong; Chen, Tzer-Long; Chung, Yu-Fang; Chen, Yu- Xin; Hwang, Jen-Hung; Wang, Huihui; Wei, Wei
2016-05-01
This study shows the advantage of mobile agents in conquering heterogeneous system environments and contributing to a virtually integrated sharing system. Mobile agents collect medical information from each medical institution as a method of achieving the medical purpose of data sharing. Besides, this research also provides an access control and key management mechanism by adopting public key cryptography and Lagrange interpolation. The safety analysis of the system is based on a network attacker's perspective. This study aims to improve medical quality, prevent the waste of medical resources, and give access to medical resources under an appropriate configuration.
Modeling the Propagation of Shock Waves in Metals
NASA Astrophysics Data System (ADS)
Howard, W. Michael
2005-07-01
We present modeling results for the propagation of strong shock waves in metals. In particular, we use an arbitrary Lagrange Eulerian (ALE3D) code to model the propagation of strong pressure waves (P ≈ 300 to 400 kbars) generated with high explosives in contact with aluminum cylinders. The aluminum cylinders are assumed to be both flat-topped and to have large-amplitude curved surfaces. We use 3D Lagrange mechanics. For the aluminum we use a rate-independent Steinberg-Guinan model, where the yield strength and bulk modulus depend on pressure, density and temperature. The calculation of the melt temperature is based on the Lindemann law. At melt, the yield strength and bulk modulus are set to zero. The pressure is represented as a seven-term polynomial as a function of density. For the HMX-based high explosive, we use a JWL equation of state with a program burn model that gives the correct detonation velocity and C-J pressure (P ≈ 390 kbars). For the case of the large-amplitude curved surface, we discuss the evolving shock structure in terms of the early shock propagation experiments by Sakharov. We also discuss the dependence of our results upon our material model for aluminum.
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were the determination of the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of the inputs and the prediction error in the presence of values outside the range of training. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of the inputs. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset that contains normally distributed, low-variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82), but it was not robust in extrapolation (R² = 0.63). The analysis of the extrapolation results has shown that the prediction errors are correlated with the significance of the inputs. Namely, out-of-training-range values of inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that the DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
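The underlying (t, n) construction such schemes build on is Shamir's, in which the secret is the constant term of a random polynomial and any t shares recover it by Lagrange interpolation at x = 0. A toy sketch over a small prime field (real deployments use larger fields and the hardening described above):

```python
import random

P = 2**127 - 1          # a Mersenne prime; real schemes use larger fields

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: hide `secret` as the constant term of a
    random degree t-1 polynomial over GF(P); shares are points on it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for a in reversed(coeffs):        # Horner evaluation mod P
            acc = (acc * x + a) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(recover(shares[:3]))    # any 3 of the 5 shares -> 123456789
```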
General-Purpose Software For Computer Graphics
NASA Technical Reports Server (NTRS)
Rogers, Joseph E.
1992-01-01
NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.
Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST
NASA Astrophysics Data System (ADS)
Kerbel, G.; Xiong, Z.
2006-10-01
Early implementations of Fokker-Planck collision operator and moment computations in TEMPEST used low-order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate, we developed an alternative higher-order interpolation scheme for the Rosenbluth potentials and a high-order finite volume method in TEMPEST (,) coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and then the fluxes for the conservative differencing are computed directly in the TEMPEST (,) coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.
Finite elements based on consistently assumed stresses and displacements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1985-01-01
Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.
Instability of the cored barotropic disc: the linear eigenvalue formulation
NASA Astrophysics Data System (ADS)
Polyachenko, E. V.
2018-05-01
Gaseous rotating razor-thin discs are a testing ground for theories of spiral structure that try to explain appearance and diversity of disc galaxy patterns. These patterns are believed to arise spontaneously under the action of gravitational instability, but calculations of its characteristics in the gas are mostly obscured. The paper suggests a new method for finding the spiral patterns based on an expansion of small amplitude perturbations over Lagrange polynomials in small radial elements. The final matrix equation is extracted from the original hydrodynamical equations without the use of an approximate theory and has a form of the linear algebraic eigenvalue problem. The method is applied to a galactic model with the cored exponential density profile.
The Natural Neighbour Radial Point Interpolation Meshless Method Applied to the Non-Linear Analysis
NASA Astrophysics Data System (ADS)
Dinis, L. M. J. S.; Jorge, R. M. Natal; Belinha, J.
2011-05-01
In this work the Natural Neighbour Radial Point Interpolation Method (NNRPIM) is extended to the large-deformation analysis of elastic and elasto-plastic structures. The NNRPIM uses the Natural Neighbour concept in order to enforce the nodal connectivity and to create a node-depending background mesh, used in the numerical integration of the NNRPIM interpolation functions. Unlike the FEM, where geometrical restrictions on elements are imposed for the convergence of the method, in the NNRPIM there are no such restrictions, which permits a random node distribution for the discretized problem. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed using the Radial Point Interpolators, with some differences that modify the method's performance. In the construction of the NNRPIM interpolation functions no polynomial base is required, and the Radial Basis Function (RBF) used is the Multiquadric RBF. The NNRPIM interpolation functions possess the Kronecker delta property, which simplifies the imposition of the natural and essential boundary conditions. One of the aims of this work is to validate the NNRPIM in large-deformation elasto-plastic analysis; thus the non-linear solution algorithm used is the Newton-Raphson initial stiffness method, and the efficient "forward-Euler" procedure is used in order to return the stress state to the yield surface. Several non-linear examples, exhibiting elastic and elasto-plastic material properties, are studied to demonstrate the effectiveness of the method. The numerical results indicate that the NNRPIM handles large material distortion effectively and provides an accurate solution under large deformation.
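As a rough, self-contained illustration of the radial point interpolation ingredient with a Multiquadric RBF (not the NNRPIM itself, which additionally builds natural-neighbour integration cells and a Galerkin weak form), consider the following Python sketch; the node set, shape parameter and test function are arbitrary choices:

```python
import numpy as np

def mq_rbf_interpolant(nodes, values, c=0.5):
    """Build a Multiquadric RBF interpolant on scattered 2-D nodes.

    nodes  : (n, 2) array of node coordinates
    values : (n,)  array of nodal data
    c      : shape parameter (an arbitrary choice for this sketch)
    """
    # Pairwise distances between nodes, then the MQ interpolation matrix.
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    phi = np.sqrt(d**2 + c**2)
    coeffs = np.linalg.solve(phi, values)

    def evaluate(points):
        dp = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=-1)
        return np.sqrt(dp**2 + c**2) @ coeffs

    return evaluate

# Scattered nodes and a smooth test field (both invented for the demo).
rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=(50, 2))
f = lambda p: np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])
interp = mq_rbf_interpolant(nodes, f(nodes))

test = rng.uniform(0.0, 1.0, size=(5, 2))
print(np.max(np.abs(interp(test) - f(test))))  # interpolation error at test points
```

Because the interpolant reproduces the nodal values exactly, it inherits the Kronecker delta property mentioned in the abstract, which is what makes direct imposition of essential boundary conditions possible.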
Slave finite elements: The temporal element approach to nonlinear analysis
NASA Technical Reports Server (NTRS)
Gellin, S.
1984-01-01
A formulation method for finite elements in space and time incorporating nonlinear geometric and material behavior is presented. The method uses interpolation polynomials for approximating the behavior of various quantities over the element domain, and only explicit integration over space and time. While applications are general, the plate and shell elements that are currently being programmed are appropriate to model turbine blades, vanes, and combustor liners.
A User’s Manual for Fiber Diffraction: The Automated Picker and Huber Diffractometers
1990-07-01
[Only fragments of this report survive extraction.] Figure 3 shows a layer line scan of degummed silk (Bombyx mori) covering layers 0 through 6. The surviving text notes that if a fit is rejected, new values are supplied; that scans were originally made at intervals larger than 0.010; and that smoothing and interpolation are done by a least-squares polynomial fit to segments of the data.
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through a high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
NASA Astrophysics Data System (ADS)
Sauer, Roger A.
2013-08-01
Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng, 87: 593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities such as the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
Gong, Rui; Xu, Haisong; Tong, Qingfen
2012-10-20
The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on a colorimetric evaluation of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely the polynomial compensation model (PC model), which accounts for channel interactions, was proposed for AMOLED panels. In this model, polynomial expressions are employed to relate the prediction errors of the XYZ tristimulus values to the digital inputs, so as to compensate for the XYZ prediction errors of the conventional piecewise linear interpolation assuming variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays as well as for a professional liquid crystal display monitor.
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Bang, Børre; Lakså, Arne; Zanaty, Peter
2011-12-01
At the Seventh International Conference on Mathematical Methods for Curves and Surfaces, Tønsberg, Norway, in 2008, several new constructions for Hermite interpolation on scattered point sets in domains in R^n, n ∈ N, combined with a smooth convex partition of unity for several general types of partitions of these domains, were proposed in [1]. All of these constructions were based on a new type of B-splines, proposed by some of the authors several years earlier: expo-rational B-splines (ERBS) [3]. In the present communication we provide more details about one of these constructions: the one for the most general class of domain partitions considered. This construction is based on the use of two separate families of basis functions: one which has all the necessary Hermite interpolation properties, and another which has the necessary properties of a smooth convex partition of unity. The constructions of both of these bases are well known; the new part is the combined use of these bases to derive a new basis which enjoys all of the above interpolation and partition-of-unity properties simultaneously. In [1] the emphasis was put on the use of radial basis functions in the definitions of the two initial bases; here we put the main emphasis on the case when these bases consist of tensor-product B-splines. This selection provides two useful advantages: (A) it is easier to compute higher-order derivatives while working in Cartesian coordinates; (B) it becomes clear that this construction is a far-reaching extension of tensor-product constructions. We provide 3-dimensional visualization of the resulting bivariate bases, using tensor-product ERBS. In the main tensor-product variant, we also consider replacement of ERBS with the simpler generalized ERBS (GERBS) [2], namely their simplified polynomial modifications: the Euler Beta-function B-splines (BFBS). One advantage of using BFBS instead of ERBS is the simplified computation, since BFBS are piecewise polynomial, while ERBS are not. One disadvantage of using BFBS in place of ERBS in this construction is that the necessary selection of the degree of the BFBS imposes constraints on the maximal possible multiplicity of the Hermite interpolation.
A geometrical interpretation of the 2n-th central difference
NASA Technical Reports Server (NTRS)
Tapia, R. A.
1972-01-01
Many algorithms used for data smoothing, data classification and error detection require the calculation of the distance from a point to the polynomial interpolating its 2n neighbors (n on each side). This computation, if performed naively, would require the solution of a system of equations and could create numerical problems. This note shows that if the data is equally spaced, then this calculation can be performed using a simple recursion formula.
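The identity such recursions rest on is that the 2n-th central difference annihilates every polynomial of degree at most 2n-1, so applying it to the data isolates the deviation of the centre point from the interpolant through its 2n neighbours. A short Python sketch of this route (not necessarily the note's exact recursion formula):

```python
import numpy as np
from math import comb

def distance_to_interpolant(f, n):
    """Distance from each interior data point to the degree-(2n-1) polynomial
    interpolating its 2n equally spaced neighbours (n on each side).

    Since the 2n-th difference annihilates such polynomials, only the centre
    value's deviation survives, scaled by the central binomial coefficient.
    """
    d = np.asarray(f, dtype=float)
    for _ in range(2 * n):          # build the 2n-th difference by recursion
        d = d[1:] - d[:-1]
    return (-1) ** n * d / comb(2 * n, n)

# Check: for data from a cubic, the degree-3 interpolant (n = 2) is exact,
# so the distances vanish up to rounding.
x = np.arange(10.0)
print(np.max(np.abs(distance_to_interpolant(x**3 - 2.0 * x, 2))))
```

No linear system is solved anywhere, which is precisely the numerical advantage the note points out for equally spaced data.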
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1987-01-01
During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly. Usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The task is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties. The approximation thus achieved is almost best. Implementation of the algorithm for practical problems is described. Since the solution to a linear system Ax = b is x = A^(-1)b, an iterative solution to it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
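A simplified Python sketch of the idea for a symmetric matrix, where the spectrum is real and Chebyshev points on the spectral interval serve as an interpolation set with good properties (the paper works with more general complex domains, and the eigenvalue bounds here are computed exactly only because the demo matrix is small):

```python
import numpy as np

def poly_apply(f, A, v, deg=20):
    """Approximate f(A) @ v with a polynomial interpolating f on the spectral
    interval of the symmetric matrix A (an assumption of this sketch)."""
    lam = np.linalg.eigvalsh(A)                 # cheap bounds suffice in practice
    a, b = lam.min(), lam.max()
    # Chebyshev points on [a, b]: a near-optimal interpolation set.
    k = np.arange(deg + 1)
    x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * deg + 2))
    # Newton divided differences of f at the nodes.
    c = f(x).astype(float)
    for j in range(1, deg + 1):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    # Horner evaluation of the Newton form at the matrix, applied to v:
    # p(A)v = c0 v + (A - x0 I)(c1 v + (A - x1 I)(...)).
    w = c[deg] * v
    for j in range(deg - 1, -1, -1):
        w = c[j] * v + A @ w - x[j] * w
    return w

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = 0.05 * (B + B.T)                 # small symmetric test matrix
v = rng.standard_normal(50)
w = poly_apply(np.exp, A, v, deg=20)

# Reference via eigendecomposition; the polynomial result is nearly exact.
lam, V = np.linalg.eigh(A)
print(np.linalg.norm(w - V @ (np.exp(lam) * (V.T @ v))))
```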
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
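A compact Python sketch of the two-step estimation idea on synthetic data (the predictors, threshold and noise are invented for the demo; the paper's actual covariates and fitting choices differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Synthetic per-gridpoint predictors (stand-ins for, e.g., elevation and a
# distance-weighted neighbour precipitation signal).
X = rng.uniform(0.0, 1.0, size=(500, 2))
wet = (X[:, 1] + 0.2 * rng.standard_normal(500)) > 0.5   # synthetic occurrence
amount = np.where(wet, 5.0 * X[:, 1] + rng.gamma(2.0, 1.0, 500), 0.0)

# Step 1: model precipitation occurrence with logistic regression.
occ = LogisticRegression().fit(X, wet)

# Step 2: model the amount on wet days only.
amt = LinearRegression().fit(X[wet], amount[wet])

# Combined estimate at new points: occurrence decision, then predicted amount.
Xnew = rng.uniform(0.0, 1.0, size=(5, 2))
is_wet = occ.predict_proba(Xnew)[:, 1] > 0.5
print(np.where(is_wet, amt.predict(Xnew), 0.0))
```

Separating occurrence from amount is what lets the scheme reproduce the intermittent, highly skewed character of daily precipitation that a single regression smooths away.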
Breakthroughs in Low-Profile Leaky-Wave HPM Antennas
2015-12-21
[Only fragments of this report survive extraction: a MATLAB excerpt computing a derivative value f'(z1(n)) and storing its slope angle for testing, and text from Section 3.5, "Design approximation for the lower (PEC) wall of the LWA," noting that an algorithm was developed and tested for this design and that three different alternatives were tested, among which a function created by interpolating a polynomial through all the computed points appeared promising.]
Eulerian-Lagrangian solution of the convection-dispersion equation in natural coordinates
Cheng, Ralph T.; Casulli, Vincenzo; Milford, S. Nevil
1984-01-01
The vast majority of numerical investigations of transport phenomena use an Eulerian formulation for the convenience that the computational grids are fixed in space. An Eulerian-Lagrangian method (ELM) of solution for the convection-dispersion equation is discussed and analyzed. The ELM uses the Lagrangian concept in an Eulerian computational grid system. The values of the dependent variable off the grid are calculated by interpolation. When a linear interpolation is used, the method is a slight improvement over the upwind difference method. At this level of approximation both the ELM and the upwind difference method suffer from large numerical dispersion. However, if second-order Lagrangian polynomials are used in the interpolation, the ELM is proven to be free of artificial numerical dispersion for the convection-dispersion equation. The concept of the ELM is extended to the treatment of anisotropic dispersion in natural coordinates. In this approach the anisotropic properties of dispersion can be conveniently related to the properties of the flow field. Several numerical examples are given to further substantiate the results of the present analysis.
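A minimal Python sketch of the ELM idea for constant-speed advection on a periodic grid, using second-order (quadratic) Lagrange interpolation at the departure points; the grid, speed and time step are arbitrary choices, and no dispersion term is included:

```python
import numpy as np

def elm_advect(c, u, dx, dt, nsteps):
    """Semi-Lagrangian advection of profile c at constant speed u on a
    periodic grid: trace each characteristic back and interpolate with a
    quadratic Lagrange polynomial at its foot (a sketch of the ELM idea)."""
    n = len(c)
    x = np.arange(n) * dx
    for _ in range(nsteps):
        xd = (x - u * dt) % (n * dx)          # departure points
        j = np.rint(xd / dx).astype(int) % n  # nearest grid index
        s = xd / dx - np.rint(xd / dx)        # offset in [-1/2, 1/2]
        cm, c0, cp = c[(j - 1) % n], c[j], c[(j + 1) % n]
        # Quadratic Lagrange basis on the 3-point stencil {-1, 0, +1}.
        c = 0.5 * s * (s - 1.0) * cm + (1.0 - s**2) * c0 + 0.5 * s * (s + 1.0) * cp
    return c

n, dx, u, dt = 200, 1.0, 0.7, 0.5            # Courant number u*dt/dx = 0.35
c0 = np.exp(-0.5 * ((np.arange(n) - 50.0) / 5.0) ** 2)
c1 = elm_advect(c0.copy(), u, dx, dt, 400)
print(c1.max())  # peak largely preserved; linear interpolation smears it far more
```

The step from linear to quadratic interpolation is exactly the upgrade the abstract describes: it removes the leading artificial-dispersion error of the linearly interpolated (upwind-like) variant.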
A class of reduced-order models in the theory of waves and stability.
Chapman, C J; Sorokin, S V
2016-02-01
This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.
The Adams formulas for numerical integration of differential equations from 1st to 20th order
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
The Adams-Bashforth predictor coefficients and the Adams-Moulton corrector coefficients for the integration of differential equations are presented for methods of 1st to 20th order. The order of the method as presented refers to the highest-order difference formula used in Newton's backward difference interpolation formula, on which the Adams method is based. The Adams method is a polynomial approximation method derived from Newton's backward difference interpolation formula. The Newton formula is derived and expanded to 20th order. The Adams predictor and corrector formulas are derived and expressed in terms of differences of the derivatives, as well as in terms of the derivatives themselves. All coefficients are given to 18 significant digits. For the difference formula only, the ratio coefficients are given to 10th order.
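Since the Adams coefficients are integrals of the interpolating polynomial's basis functions, they can be regenerated exactly in a few lines. The Python sketch below takes the Lagrange route in exact rational arithmetic rather than the report's backward-difference formulation:

```python
from fractions import Fraction

def adams_bashforth(k):
    """Coefficients b_j of the k-step Adams-Bashforth method,
    y_{n+1} = y_n + h * sum_j b_j f_{n-j},
    obtained by integrating the Lagrange polynomial through f at
    t = 0, -1, ..., -(k-1) over the step [0, 1] (in units of h)."""
    nodes = [Fraction(-j) for j in range(k)]
    coeffs = []
    for j in range(k):
        # Build the Lagrange basis L_j as a coefficient list (ascending powers).
        poly = [Fraction(1)]
        for m in range(k):
            if m == j:
                continue
            denom = nodes[j] - nodes[m]
            new = [Fraction(0)] * (len(poly) + 1)
            for p, a in enumerate(poly):          # multiply by (t - t_m)/denom
                new[p] += a * (-nodes[m]) / denom
                new[p + 1] += a / denom
            poly = new
        # Integrate L_j over [0, 1]: sum of a_p / (p + 1).
        coeffs.append(sum(a / (p + 1) for p, a in enumerate(poly)))
    return coeffs

print(adams_bashforth(3))   # [23/12, -4/3, 5/12], the classical AB3 weights
```

Exact rationals avoid the rounding questions that make 18-digit tabulations necessary in the first place; converting with float() recovers the decimal values.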
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes for imposing the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, in the smooth regions a fixed stencil approximation without computing the non-linear WENO weights is used, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with those of the corresponding fine grid.
Secure Dynamic access control scheme of PHR in cloud computing.
Chen, Tzer-Shyong; Liu, Chia-Hui; Chen, Tzer-Long; Chen, Chin-Sheng; Bau, Jian-Guo; Lin, Tzu-Ching
2012-12-01
With the development of information technology and medical technology, medical information has evolved from traditional paper records into electronic medical records, which are now widely used. A new style of medical information exchange, the personal health record (PHR), is gradually being developed. A PHR is a health record maintained and recorded by the individual. An ideal personal health record integrates personal medical information from different sources and provides a complete and correct personal health and medical summary through the Internet or portable media under the requirements of security and privacy. Many personal health records are already in use. The patient-centered PHR information exchange system allows the public to autonomously maintain and manage personal health records, which is convenient for storing, accessing, and sharing personal medical records. With the emergence of Cloud computing, PHR services have moved to storing data on Cloud servers, so that resources can be flexibly utilized and operating costs reduced. Nevertheless, patients face privacy problems when storing PHR data in the Cloud. Moreover, a secure protection scheme is required to encrypt the medical records of each patient before storing PHR data on a Cloud server. In the encryption process, it is a challenge to achieve accurate access to medical records while maintaining flexibility and efficiency. A new PHR access control scheme under Cloud computing environments is proposed in this study. Using a Lagrange interpolation polynomial to establish a secure and effective PHR information access scheme, it provides accurate and secure access to PHRs and is suitable for very large numbers of users. Moreover, the scheme dynamically supports multiple users in Cloud computing environments with personal privacy and grants legal authorities access to PHRs. Security and effectiveness analyses show that the proposed PHR access scheme in Cloud computing environments is flexible and secure and can effectively handle real-time granting and revoking of user access authorization as well as appending and revising of PHR records.
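The Lagrange-interpolation mechanism typically underlying such access schemes can be illustrated with a Shamir-style threshold reconstruction over a prime field; this Python sketch is generic, not the paper's full protocol, and the field prime and parameters are arbitrary:

```python
import random

P = 2**61 - 1   # a Mersenne prime field; the choice is arbitrary for the sketch

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it,
    by evaluating a random degree-(t-1) polynomial with f(0) = secret."""
    a = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for coef in reversed(a):      # Horner evaluation mod P
            y = (y * x + coef) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover f(0) from t shares by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P            # numerator of L_i(0)
                den = den * (xi - xj) % P        # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # 123456789, from any 3 of the 5 shares
```

Fewer than t shares reveal nothing about f(0), which is what makes the polynomial a natural carrier for threshold access keys.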
Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.
2015-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents efforts to improve knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field by means of numerical modeling. The challenge is to couple the prompt processes that take place in the near-source region to the ones taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability. This has allowed the testing of a new phenomenological model for stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, as well as the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full waveform modeling with mathematical accuracy (e.g., Komatitsch, 1998, 2002), thanks to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of the local topography.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1975-01-01
A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
A direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented using a fixed step size, and the numerical results that follow demonstrate the advantage of the direct method as compared to the reduction method.
Close-range photogrammetry with video cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
Mocan, Mehmet C; Ilhan, Hacer; Gurcay, Hasmet; Dikmetas, Ozlem; Karabulut, Erdem; Erdener, Ugur; Irkec, Murat
2014-06-01
To derive a mathematical expression for the healthy upper eyelid (UE) contour and to use this expression to differentiate the normal UE curve from its abnormal configuration in the setting of blepharoptosis. The study was designed as a cross-sectional study. Fifty healthy subjects (26M/24F) and 50 patients with blepharoptosis (28M/22F) with a margin-reflex distance (MRD1) of ≤2.5 mm were recruited. A polynomial interpolation was used to approximate the UE curve. The polynomial coefficients were calculated from digital eyelid images of all participants using a set of operator-defined points along the UE curve. Coefficients up to the fourth-order polynomial, iris area covered by the UE, iris area covered by the lower eyelid, and total iris area covered by both the upper and the lower eyelids were defined using the polynomial function and used in statistical comparisons. The t-test, Mann-Whitney U test and Spearman's correlation test were used for statistical comparisons. The mathematical expression derived from the data of 50 healthy subjects aged 24.1 ± 2.6 years was y = 22.0915 - 1.3213x + 0.0318x^2 - 0.0005x^3. The fifth and subsequent coefficients were <0.00001 in all cases and were not included in the polynomial function. None of the coefficients up to the fourth order differed significantly between male and female subjects. In normal subjects, the percentage of the iris area covered by the upper and lower lids was 6.46 ± 5.17% and 0.66 ± 1.62%, respectively. All coefficients and the mean iris area covered by the UE were significantly different between healthy and ptotic eyelids. The healthy and abnormal eyelid contour can be defined and differentiated using a polynomial mathematical function.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of the predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
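A compressed one-parameter Python illustration of the two-step idea, in which a dense Chebyshev interpolant stands in for the sparse grid; the forward model, observation and noise level are invented for the sketch:

```python
import numpy as np
from scipy.stats import qmc

# "Expensive" forward model (a stand-in; evaluated only while building the grid).
def forward(theta):
    return np.sin(3.0 * theta) + 0.5 * theta**2

obs, sigma = 0.8, 0.1                    # synthetic observation and noise level

# Step 1: polynomial surrogate of the forward model on the prior support [0, 1].
surrogate = np.polynomial.chebyshev.Chebyshev.interpolate(
    forward, deg=15, domain=[0.0, 1.0])

# Step 2: quasi-Monte Carlo samples of the uniform prior on [0, 1].
theta = qmc.Sobol(d=1, scramble=True, seed=0).random(2**12).ravel()

# Unnormalized surrogate posterior weights at the QMC samples (Gaussian likelihood).
w = np.exp(-0.5 * ((surrogate(theta) - obs) / sigma) ** 2)
w /= w.sum()

# Posterior summaries of a prediction quantity, here the model output itself.
pred = surrogate(theta)
print("posterior mean prediction:", np.sum(w * pred))
```

All likelihood evaluations hit the cheap surrogate rather than the forward model, which is the source of the speedup the abstract reports; in higher dimensions the Chebyshev grid is replaced by the sparse grid.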
Method of Determining the Aerodynamic Characteristics of a Flying Vehicle from the Surface Pressure
NASA Astrophysics Data System (ADS)
Volkov, V. F.; Dyad'kin, A. A.; Zapryagaev, V. I.; Kiselev, N. P.
2017-11-01
The paper describes the procedure used for determining the aerodynamic characteristics (forces and moments acting on a model of a flying vehicle) from the results of pressure measurements on the surface of a model of a re-entry vehicle with operating retrofire brake rockets in the regime of hovering over a landing surface. The algorithm for constructing the interpolation polynomial over interpolation nodes in the radial and azimuthal directions, using the assumption of symmetry of the pressure distribution over the surface, is presented. The aerodynamic forces and moments at different tilts of the vehicle are obtained. It is shown that the aerodynamic force components acting on the vehicle in the landing regime caused by the action of the vertical velocity deceleration nozzle jets are negligibly small in comparison with the engine thrust.
Turbine adapted maps for turbocharger engine matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tancrez, M.; Galindo, J.; Guardiola, C.
2011-01-15
This paper presents a new representation of turbine performance maps oriented toward turbocharger characterization. The aim of this plot is to provide a more compact form better suited for implementation in engine simulation models and for interpolating data from turbocharger test benches. The new map is based on the use of conservative parameters, such as turbocharger power and turbine mass flow, to describe the turbine performance at all VGT positions. The curves obtained are accurately fitted with quadratic polynomials, and simple interpolation techniques give reliable results. Two turbochargers characterized in a steady flow rig were used to illustrate the representation. After being implemented in a turbocharger submodel, the results obtained with the model were compared successfully against turbine performance evaluated in engine test cells. A practical application in turbocharger matching is also provided to show how this new map can be directly employed in engine design.
Serial interpolation for secure membership testing and matching in a secret-split archive
Kroeger, Thomas M.; Benson, Thomas R.
2016-12-06
The various technologies presented herein relate to analyzing a plurality of shares stored at a plurality of repositories to determine whether a secret from which the shares were formed matches a term in a query. A threshold number of shares are formed with a generating polynomial operating on the secret. A process of serially interpolating the threshold number of shares can be conducted whereby a contribution of a first share is determined, a contribution of a second share is determined while seeded with the contribution of the first share, etc. A value of a final share in the threshold number of shares can be determined and compared with the search term. In the event of the value of the final share and the search term matching, the search term matches the secret in the file from which the shares are formed.
Human evaluation in association to the mathematical analysis of arch forms: Two-dimensional study.
Zabidin, Nurwahidah; Mohamed, Alizae Marny; Zaharim, Azami; Marizan Nor, Murshida; Rosli, Tanti Irawati
2018-03-01
To evaluate the relationship between human evaluation of the dental-arch form, to complete a mathematical analysis via two different methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation. This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For the human evaluation, a convenience sample of orthodontic practitioners ranked photo images of the dental casts from the most tapered to the least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the mathematical objective analyses were evaluated. Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable. The use of the fourth-order polynomial equation may facilitate obtaining a smooth curve, which can produce a template for an individual arch that represents all potential tooth positions for the dental arch. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Gorji, Taha; Sertel, Elif; Tanik, Aysegul
2017-12-01
Soil management is an essential concern in protecting soil properties, enhancing appropriate soil quality for plant growth and agricultural productivity, and preventing soil erosion. Soil scientists and decision makers require accurate, well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil sciences, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Moreover, soil salinization due to both human-induced activities and natural factors has worsened conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data were used for interpolation modelling and the remainder for validation of the predicted results. The relationship between validation points and the corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps generated by the two interpolation methods for soil organic matter, phosphorus, lime and boron were examined on the basis of R2 and RMSE values. The outcomes indicate that RBF performs better than LPI in predicting lime, organic matter and boron, whereas LPI shows better results for predicting phosphorus.
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and provide optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the task sizes (maximum, minimum), execution times (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned lower voltage levels (thus reducing power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimized. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
Fast algorithms for evaluating the stress field of dislocation lines in anisotropic elastic media
NASA Astrophysics Data System (ADS)
Chen, C.; Aubry, S.; Oppelstrup, T.; Arsenlis, A.; Darve, E.
2018-06-01
In dislocation dynamics (DD) simulations, the most computationally intensive step is the evaluation of the elastic interaction forces among dislocation ensembles. Because the pair-wise interaction between dislocations is long-range, this force calculation step can be significantly accelerated by the fast multipole method (FMM). We implemented and compared four different methods in isotropic and anisotropic elastic media: one based on the Taylor series expansion (Taylor FMM), one based on the spherical harmonics expansion (Spherical FMM), one kernel-independent method based on Chebyshev interpolation (Chebyshev FMM), and a new kernel-independent method that we call the Lagrange FMM. The Taylor FMM is an existing method used in ParaDiS, one of the most popular DD simulation software packages. The Spherical FMM employs a more compact multipole representation than the Taylor FMM and is thus more efficient. However, both the Taylor FMM and the Spherical FMM are difficult to derive in anisotropic elastic media because the interaction force is complex and has no closed analytical formula. The Chebyshev FMM requires only the ability to evaluate the interaction between dislocations and thus can be applied easily in anisotropic elastic media. But it has a relatively large memory footprint, which limits its usage. The Lagrange FMM was designed to be a memory-efficient black-box method. Various numerical experiments are presented to demonstrate the convergence and the scalability of the four methods.
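The core ingredient shared by such kernel-independent FMMs is kernel compression by interpolation: between two well-separated clusters, the kernel is sampled only at interpolation nodes and transferred to the actual points through Lagrange basis evaluations. A minimal 1-D Python sketch with a 1/r stand-in kernel (node counts, cluster geometry and the kernel itself are arbitrary choices, not the paper's elastic interaction):

```python
import numpy as np

def cheb_nodes(a, b, p):
    """p Chebyshev points on [a, b]."""
    k = np.arange(p)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * p))

def lagrange_matrix(x, nodes):
    """S[i, j] = L_j(x_i): Lagrange basis at `nodes` evaluated at points x."""
    S = np.ones((len(x), len(nodes)))
    for j in range(len(nodes)):
        for m in range(len(nodes)):
            if m != j:
                S[:, j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
    return S

kernel = lambda x, y: 1.0 / np.abs(x[:, None] - y[None, :])   # 1/r stand-in

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, 300)      # source cluster
trg = rng.uniform(3.0, 4.0, 300)      # well-separated target cluster
q = rng.standard_normal(300)          # source strengths

p = 8                                 # interpolation order
xs, xt = cheb_nodes(0.0, 1.0, p), cheb_nodes(3.0, 4.0, p)
Ss, St = lagrange_matrix(src, xs), lagrange_matrix(trg, xt)

# Far-field product through the p x p compressed kernel: O(p(N+M) + p^2)
# instead of the O(NM) direct sum.
approx = St @ (kernel(xt, xs) @ (Ss.T @ q))
exact = kernel(trg, src) @ q
print(np.max(np.abs(approx - exact)) / np.max(np.abs(exact)))  # decays fast in p
```

Only kernel evaluations are needed, which is exactly why the Chebyshev- and Lagrange-type variants extend painlessly to anisotropic media where no closed-form expansion exists.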
Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure
1982-11-01
systematic channel code. 1. Take the inverse transform of the received sequence. 2. Isolate the error syndrome from the inverse transform and use ... The inverse transform is identical with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) code, we let the set ... in accordance with the transform of equation (4). If we were to apply the inverse transform of equation (6) to the coefficient sequence of A(z), we ...
On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States
NASA Technical Reports Server (NTRS)
Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)
1996-01-01
Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm^-1), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
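As a toy Python sketch of the switched-prediction idea (real-valued decorrelation only; actual lossless floating-point coding, as in APE/ACE, additionally requires an exact bitwise residual mapping and entropy coding, both omitted here), one can select, per block, the best of a preset group of polynomial predictors and store its index as side information:

```python
import numpy as np

# Preset group of polynomial extrapolators acting on the last three samples.
PREDICTORS = [
    lambda p: p[-1],                          # constant (order 0)
    lambda p: 2 * p[-1] - p[-2],              # linear extrapolation (order 1)
    lambda p: 3 * p[-1] - 3 * p[-2] + p[-3],  # quadratic extrapolation (order 2)
]

def encode(x, block=64):
    """Per block, pick the predictor with the smallest total absolute residual
    and store (predictor id, residuals). Because predictions depend only on
    already-decoded samples, x is exactly recoverable from ids and residuals."""
    ids, res = [], np.empty_like(x)
    res[:3] = x[:3]                           # verbatim warm-up samples
    for s in range(3, len(x), block):
        end = min(s + block, len(x))
        best = min(range(len(PREDICTORS)), key=lambda k: sum(
            abs(x[i] - PREDICTORS[k](x[i - 3:i])) for i in range(s, end)))
        ids.append(best)
        for i in range(s, end):
            res[i] = x[i] - PREDICTORS[best](x[i - 3:i])
    return ids, res

x = np.cumsum(np.random.default_rng(0).standard_normal(1024))  # smooth-ish data
ids, res = encode(x)
print(np.abs(x[3:]).mean(), np.abs(res[3:]).mean())  # residuals are far smaller
```

Block-wise switching is what lets such an encoder adapt to varying statistics within a dataset, which is the property the abstract emphasizes.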
Molecular Dynamics Analysis of Lysozyme Protein in Ethanol-Water Mixed Solvent Environment
NASA Astrophysics Data System (ADS)
Ochije, Henry Ikechukwu
The effect of protein-solvent interaction on protein structure is widely studied using both experimental and computational techniques. Despite such extensive studies, molecular-level understanding of proteins in even simple solvents is still incomplete. This work uses detailed molecular dynamics simulations to study solvent effects on lysozyme, using water, alcohol and different concentrations of water-alcohol mixtures as solvents. The lysozyme structure in water, alcohol and alcohol-water mixtures (0-12% alcohol) was studied using the GROMACS molecular dynamics simulation code. Compared to the water environment, the lysozyme structure showed remarkable changes in solvents with increasing alcohol concentration. In particular, significant changes were observed in the protein secondary structure involving alpha helices. The influence of alcohol on the lysozyme protein was investigated by studying thermodynamic and structural properties. With increasing ethanol concentration we observed a systematic increase in total energy, enthalpy, root mean square deviation (RMSD), and radius of gyration; these trends were then captured with a polynomial interpolation approach. Using the resulting polynomial equation, we could determine the above quantities for any intermediate alcohol percentage. In order to validate this approach, we selected an intermediate ethanol percentage and carried out a full MD simulation. The results from the MD simulation were in reasonably good agreement with those obtained using the polynomial approach. Hence, the polynomial-based method proposed here eliminates the need for computationally intensive full MD analysis for concentrations within the range (0-12%) studied in this work.
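A minimal Python sketch of the fit-then-interpolate step; the concentration grid and property values below are placeholders, not the thesis data:

```python
import numpy as np

# Ethanol percentages simulated and, e.g., radius-of-gyration values
# (numbers here are invented placeholders for illustration).
pct = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
rg  = np.array([1.40, 1.41, 1.43, 1.46, 1.50, 1.55, 1.61])

coeff = np.polyfit(pct, rg, deg=2)      # low-order polynomial fit to the trend
rg_at_5 = np.polyval(coeff, 5.0)        # estimate at an intermediate 5% mixture
print(coeff, rg_at_5)
```

The degree should stay low relative to the number of simulated concentrations; within the sampled 0-12% range this reads as interpolation, while extrapolating beyond it would not be justified by the fit.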
A Flight Control System for Small Unmanned Aerial Vehicle
NASA Astrophysics Data System (ADS)
Tunik, A. A.; Nadsadnaya, O. I.
2018-03-01
The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities that define the flight operating range. An FCS with program adaptation based on static feedback (SF) is selected. The SF parameters for each sub-range of the true airspeed are determined using the linear matrix inequality approach for discrete systems to synthesize a suboptimal robust H∞ controller. The use of Lagrange interpolation between true-airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is demonstrated with an example of a heading stabilization system.
MaxEnt alternatives to pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
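The Lagrange-multiplier mechanics can be sketched numerically in Python for a single first-moment constraint (the prescribed β1 and β2 of the paper add more multipliers, but the structure is the same); the support grid and the multiplier bracket below are arbitrary choices:

```python
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-5.0, 5.0, 2001)      # bounded support grid (arbitrary choice)
dx = x[1] - x[0]
target_mean = 1.0

def mean_error(lam):
    """Mean of the MaxEnt density p ~ exp(lam * x), minus the target.
    Under a mean constraint alone, MaxEnt gives a truncated exponential."""
    p = np.exp(lam * x)
    p /= p.sum() * dx                 # normalize the density on the grid
    return (x * p).sum() * dx - target_mean

lam = brentq(mean_error, -10.0, 10.0)     # solve for the Lagrange multiplier
p = np.exp(lam * x)
p /= p.sum() * dx
entropy = -(p * np.log(p)).sum() * dx
print(lam, entropy)                        # multiplier and resulting entropy
```

With β1 and β2 prescribed, the exponent becomes a polynomial in x with several multipliers solved simultaneously, which is where semi-symbolic interpolation machinery of the kind described above earns its keep.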
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method has been developed in recent years for the analysis of structural mechanics problems. This method treats all independent internal forces as unknown variables that can be calculated by simultaneously imposing equations of equilibrium and compatibility conditions. In this paper a finite element library for analyzing two-dimensional problems by the Integrated Force Method is presented. Triangular- and quadrilateral-shaped elements capable of modeling arbitrary domain configurations are presented. The element equilibrium and flexibility matrices are derived by discretizing the expressions for potential and complementary energies, respectively. The displacement and stress fields within the finite elements are independently approximated. The displacement field is interpolated as it is in the standard displacement method, and the stress field is approximated by using complete polynomials of the correct order. A procedure that uses the definitions of stress components in terms of an Airy stress function is developed to derive the stress interpolation polynomials. Such derived stress fields identically satisfy the equations of equilibrium. Moreover, the resulting element matrices are insensitive to the orientation of local coordinate systems. A method is devised to calculate the number of rigid body modes, and the present elements are shown to be free of spurious zero-energy modes. A number of example problems are solved by using the present library, and the results are compared with corresponding analytical solutions and with results from the standard displacement finite element method. The Integrated Force Method not only gives results that agree well with analytical and displacement method results but also outperforms the displacement method in stress calculations.
Numerical solution of transport equation for applications in environmental hydraulics and hydrology
NASA Astrophysics Data System (ADS)
Rashidul Islam, M.; Hanif Chaudhry, M.
1997-04-01
The advective term in the one-dimensional transport equation, when numerically discretized, produces artificial diffusion. To minimize such artificial diffusion, which vanishes only for a Courant number equal to unity, transport owing to advection has been modeled separately. The numerical solution of the advection equation for a Gaussian initial distribution is well established; however, large oscillations are observed when it is applied to an initial distribution with steep gradients, such as a trapezoidal distribution of a constituent or the propagation of mass from a continuous input. In this study, the application of seven finite-difference schemes and one polynomial interpolation scheme is investigated to solve the transport equation for both Gaussian and non-Gaussian (trapezoidal) initial distributions. The results obtained from the numerical schemes are compared with the exact solutions. A constant advective velocity is assumed throughout the transport process. For a Gaussian initial distribution, all eight schemes give excellent results, except the Lax scheme, which is diffusive. In application to the trapezoidal initial distribution, explicit finite-difference schemes prove superior to implicit finite-difference schemes because the latter produce large numerical oscillations near the steep gradients. The Warming-Kutler-Lomax (WKL) explicit scheme is found to perform best in this group. The Hermite polynomial interpolation scheme yields the best result for a trapezoidal distribution among all eight schemes investigated. Second-order accurate schemes are sufficiently accurate for most practical problems, but the solution of unusual problems (concentrations with steep gradients) requires the application of higher-order (e.g. third- and fourth-order) accurate schemes.
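A minimal sketch of the artificial-diffusion effect discussed above: a first-order upwind discretization of the advection equation translates a trapezoidal profile exactly at Courant number 1 but smears its steep gradients at Courant number 0.5. The grid and profile are illustrative, and simple upwinding stands in for the paper's schemes, which are not reproduced.

```python
import numpy as np

# u_t + a u_x = 0 with a trapezoidal initial profile on a uniform grid.
nx, dx = 200, 1.0
x = np.arange(nx) * dx
u0 = np.interp(x, [40.0, 50.0, 80.0, 90.0], [0.0, 1.0, 1.0, 0.0])

def upwind(u, courant, nsteps):
    """First-order upwind scheme for advection speed a > 0."""
    u = u.copy()
    for _ in range(nsteps):
        u[1:] = u[1:] - courant * (u[1:] - u[:-1])
    return u

# Both runs advect the profile 60 cells; only Courant number 1 is exact.
exact = upwind(u0, 1.0, 60)     # pure translation, no artificial diffusion
smeared = upwind(u0, 0.5, 120)  # steep edges are smeared by the scheme
print(np.max(np.abs(exact - smeared)))
```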
Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps
NASA Astrophysics Data System (ADS)
Gundogdu, Ismail Bulent
2017-01-01
Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more effective, especially when secondary variables are studied, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, inverse squared distance weighting and ordinary co-kriging (OCK) have been used and compared to each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
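As a point of reference for the conventional baselines mentioned above, here is a compact inverse-distance-weighting sketch with a tunable power parameter. The station coordinates and precipitation values are synthetic stand-ins, not the Turkish data set.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse distance weighting of station values onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)        # guard against zero distance at a station
    w = 1.0 / d**power
    return (w @ z_obs) / w.sum(axis=1)

# Synthetic stations (x, y in km) and precipitation values (cm).
rng = np.random.default_rng(0)
stations = rng.uniform(0.0, 100.0, size=(25, 2))
precip = 40.0 + 0.3 * stations[:, 0] + rng.normal(0.0, 2.0, 25)

gx, gy = np.meshgrid(np.linspace(0.0, 100.0, 50), np.linspace(0.0, 100.0, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = idw(stations, precip, grid, power=2.0)
print(z_map.min(), z_map.max())
```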
DeGregorio, Nicole; Iyengar, Srinivasan S
2018-01-09
We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, an approximation to the (b) potential surface, its (c) gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to compute (a) potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene) where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.
Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong
2017-01-01
Remote sensing images provide tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images inappropriate for visualization and batch processing. An effective restoration method makes images ready for further analysis. In this paper, a new method is proposed to correct the stripes and abnormal (bad) pixels in charge-coupled device (CCD) linear array images. The method involves line tracing to limit the location of noise to a rectangular region and corrects abnormal pixels with a Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and its performance was evaluated by the omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. This method saved 55.9% of the time compared with the traditional method. PMID:28441754
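The pixel-repair step lends itself to a short sketch: fit a Lagrange polynomial through the valid neighbors of a flagged pixel along the scan line and evaluate it at the bad column. The window size and the toy scan line are assumptions; the paper's line-tracing localization of the noisy rectangle is not reproduced.

```python
import numpy as np
from scipy.interpolate import lagrange

def repair_pixel(scanline, col, halfwidth=2):
    """Estimate a flagged pixel from a Lagrange polynomial through its
    valid neighbors along the scan line."""
    cols = [c for c in range(col - halfwidth, col + halfwidth + 1) if c != col]
    poly = lagrange(np.asarray(cols, dtype=float), scanline[cols])
    return poly(float(col))

line = np.array([10.0, 11.0, 12.0, 250.0, 14.0, 15.0, 16.0])  # 250 = stripe hit
line[3] = repair_pixel(line, 3)
print(line)  # the flagged pixel is replaced by the interpolated value 13
```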
NASA Astrophysics Data System (ADS)
Wang, Yuewu; Wu, Dafang
2016-10-01
Dynamic response of an axially functionally graded (AFG) beam in a thermal environment subjected to a moving harmonic load is investigated within the frameworks of classical beam theory (CBT) and Timoshenko beam theory (TBT). The Lagrange method is employed to derive the thermal buckling equations for the AFG beam, and then, with the critical buckling temperature as a parameter, the Newmark-β method is adopted to evaluate the dynamic response of the AFG beam under thermal environments. Admissible functions denoting transverse displacement are expressed in simple algebraic polynomial forms. Temperature dependency of the material constituents is considered. The rule of mixtures (Voigt model) and the Mori-Tanaka (MT) scheme are used to evaluate the beam's effective material properties. A ceramic-metal AFG beam with immovable boundary conditions is considered as a numerical illustration to show the thermal effects on the dynamic behavior of the beam subjected to a moving harmonic load.
The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.
Muller, A; Pontonnier, C; Dumont, G
2018-02-01
The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.
Trajectory Generation by Piecewise Spline Interpolation
1976-04-01
[Equation block recovered from a garbled scan] On each segment the trajectory is a piecewise cubic, L(x) = a0 + a1 x + a2 x^2 + a3 x^3 (21), with coefficients obtained from Equation (20); the legible fragments are consistent with the standard cubic Hermite relations a0 = f_i (22), a1 = f'_i (23), a2 = 3(f_{i+1} - f_i) - 2 f'_i - f'_{i+1} (24). The remaining fragments list rotations from the reference frame to the vehicle-fixed frame (with signs chosen by the sign of g_zv0 - A_zv0, Eq. (64)), the velocity-frame axis directions, the coefficients a0, a1, a2, a3 of the piecewise cubic polynomials, and a tridiagonal matrix [B].
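Assuming a unit-length segment parameterization, the coefficient relations recovered above are the standard cubic Hermite ones; a minimal sketch:

```python
import numpy as np

def hermite_segment(f0, f1, d0, d1, t):
    """Cubic L(t) = a0 + a1*t + a2*t^2 + a3*t^3 on t in [0, 1], matching
    endpoint values (f0, f1) and endpoint slopes (d0, d1)."""
    a0 = f0                                  # cf. Eq. (22)
    a1 = d0                                  # cf. Eq. (23)
    a2 = 3.0 * (f1 - f0) - 2.0 * d0 - d1     # cf. Eq. (24)
    a3 = 2.0 * (f0 - f1) + d0 + d1
    return a0 + t * (a1 + t * (a2 + t * a3))

t = np.linspace(0.0, 1.0, 5)
print(hermite_segment(0.0, 1.0, 0.0, 0.0, t))  # smooth ease-in/ease-out arc
```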
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael
2014-10-01
In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry taken already into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with stiff relaxation source terms.
A high-order spatial filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-04-01
A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid, which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is a high-order Helmholtz equation, which corresponds to implicit time-differencing of a diffusion equation employing a high-order Laplacian. The Laplacian operator is discretized within a cell, the building block of the cubed-sphere grid, which consists of Gauss-Lobatto points. When a high-order Laplacian is discretized, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge and highly sparse matrix equation of size N*N, with N the number of total grid points on the globe. The number of nonzero entries is also almost in quadratic proportion to the filter order. Filtering is accomplished by solving this matrix equation. While requiring significant computing time, the solution of the global matrix provides a filtered field free of discontinuity along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation was obtained by accounting for only a finite number of adjacent cells; this is called a local-domain filter. It was shown that to remove numerical noise near the grid scale, the inclusion of 5*5 cells in the local-domain filter was sufficient, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using standard test cases, including the baroclinic instability of the zonal flow. Results indicated that the filter performs better in removing grid-scale numerical noise than explicit high-order viscosity. It was also shown that the filter can be easily implemented on distributed-memory parallel computers with desirable scalability.
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher-order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, the highly nonlinear modified KdV equation, Fisher's equation, the Burgers-Fisher equation, the Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from the literature to confirm the accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables are given to present the order of accuracy of the method, convergence graphs to verify convergence, and error graphs to show the excellent agreement between the results from this study and the known results from the literature. PMID:25254252
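One ingredient of the method, Chebyshev spectral collocation on Gauss-Lobatto points, can be sketched with the standard differentiation matrix; the quasilinearisation and the bivariate Lagrange coupling of the full scheme are not reproduced here.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))              # diagonal from row-sum identity
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))  # spectral accuracy, ~1e-12
```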
Parametrizing the Reionization History with the Redshift Midpoint, Duration, and Asymmetry
NASA Astrophysics Data System (ADS)
Trac, Hy
2018-05-01
A new parametrization of the reionization history is presented to facilitate robust comparisons between different observations and with theory. The evolution of the ionization fraction with redshift can be effectively captured by specifying the midpoint, duration, and asymmetry parameters. Lagrange interpolating functions are then used to construct analytical curves that exactly fit the corresponding ionization points. The shape parametrizations are excellent matches to theoretical results from radiation-hydrodynamic simulations. The comparative differences for reionization observables are: ionization fraction |Δx_i| ≲ 0.03, 21 cm brightness temperature |ΔT_b| ≲ 0.7 mK, Thomson optical depth |Δτ| ≲ 0.001, and patchy kinetic Sunyaev-Zel'dovich angular power |ΔD_ℓ| ≲ 0.1 μK². This accurate and flexible approach will allow parameter-space studies and self-consistent constraints on the reionization history from 21 cm, cosmic microwave background (CMB), and high-redshift galaxies and quasars.
Mauda, R.; Pinchas, M.
2014-01-01
Recently, a new blind equalization method was proposed for the 16QAM constellation input, inspired by the maximum entropy density approximation technique, with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to a minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for them, the part of the MSE expression that caused the nonlinearity was ignored. Thus, the obtained Lagrange multipliers were not those that bring the approximated MSE to a minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813
A collocation--Galerkin finite element model of cardiac action potential propagation.
Rogers, J M; McCulloch, A D
1994-08-01
A new computational method was developed for modeling the effects of the geometric complexity, nonuniform muscle fiber orientation, and material inhomogeneity of the ventricular wall on cardiac impulse propagation. The method was used to solve a modification to the FitzHugh-Nagumo system of equations. The geometry, local muscle fiber orientation, and material parameters of the domain were defined using linear Lagrange or cubic Hermite finite element interpolation. Spatial variations of time-dependent excitation and recovery variables were approximated using cubic Hermite finite element interpolation, and the governing finite element equations were assembled using the collocation method. To overcome the deficiencies of conventional collocation methods on irregular domains, Galerkin equations for the no-flux boundary conditions were used instead of collocation equations for the boundary degrees-of-freedom. The resulting system was evolved using an adaptive Runge-Kutta method. Converged two-dimensional simulations of normal propagation showed that this method requires less CPU time than a traditional finite difference discretization. The model also reproduced several other physiologic phenomena known to be important in arrhythmogenesis including: Wenckebach periodicity, slowed propagation and unidirectional block due to wavefront curvature, reentry around a fixed obstacle, and spiral wave reentry. In a new result, we observed wavespeed variations and block due to nonuniform muscle fiber orientation. The findings suggest that the finite element method is suitable for studying normal and pathological cardiac activation and has significant advantages over existing techniques.
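The reaction kinetics being solved can be illustrated in space-clamped form with an adaptive Runge-Kutta integrator. The equations below follow a commonly cited statement of the modified FitzHugh-Nagumo system; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, a=0.13, b=0.013, c1=0.26, c2=0.1, d=1.0):
    """Space-clamped modified FitzHugh-Nagumo kinetics (illustrative values)."""
    v, w = y
    dv = c1 * v * (v - a) * (1.0 - v) - c2 * v * w
    dw = b * (v - d * w)
    return [dv, dw]

# Adaptive Runge-Kutta integration of one action-potential-like excursion.
sol = solve_ivp(fhn, (0.0, 500.0), [0.3, 0.0], method='RK45', rtol=1e-8)
print(sol.y[0].max())  # peak of the excitation variable
```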
Mapping wildfire effects on Ca2+ and Mg2+ released from ash: a microplot analysis.
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Úbeda, Xavier; Martin, Deborah
2010-05-01
Wildland fires have important implications for ecosystem dynamics. Their effects depend on many biophysical factors, mainly the burned species, the ecosystem affected, the amount and spatial distribution of fuel, relative humidity, slope, aspect, and residence time. These parameters are heterogeneous across the landscape, producing a complex mosaic of severities. Fire impacts can change rapidly even over short distances, producing high spatial variation at the microplot scale. After a fire, the most visible residue is ash, whose physical and chemical properties are of major importance because most of the nutrients available to plants reside there. It is therefore important to study ash characteristics in order to determine the type and amount of elements available to plants. This study focuses on the spatial variability of two nutrients essential to plant growth, Ca2+ and Mg2+, released from ash after a wildfire, at the microplot scale. Because fire impacts are highly variable even over small distances, mapping the release of the studied elements is difficult, and identifying the least biased interpolation method is a priority for predicting the variables accurately. The aim of this study is to map the effects of wildfire on these elements released from ash at the microplot scale, testing several interpolation methods. Sixteen interpolation techniques were tested: Inverse Distance Weighting (IDW) with weights of 1, 2, 3, 4 and 5; Local Polynomial with powers of 1 (LP1) and 2 (LP2); Polynomial Regression (PR); and Radial Basis Functions, namely Spline With Tension (SPT), Completely Regularized Spline (CRS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ), and Thin Plate Spline (TPS). Geostatistical methods from the kriging family were also tested, namely Ordinary Kriging (OK), Simple Kriging (SK) and Universal Kriging (UK). The interpolation techniques were assessed through the Mean Error (ME) and Root Mean Square Error (RMSE) obtained from cross-validation in all methods. The fire occurred in Portugal near an urban area; inside the affected area we designed a grid of 9 x 27 m and collected 40 samples. Before modelling the data, we tested normality with the Shapiro-Wilk test. Since the distributions of Ca2+ and Mg2+ were not Gaussian, we log-transformed the data (Ln); after this transformation the data were normal, and the spatial distribution was modelled with the transformed data. On average across the entire plot, the ash slurries contained 4371.01 mg/l of Ca2+, with a high coefficient of variation (CV%) of 54.05%. Of all the methods tested, LP1 was the least biased and hence the most accurate for interpolating this element; the most biased was LP2. For Mg2+, the ash released on average 1196.01 mg/l in solution, with a CV% of 52.36%, similar to that of Ca2+. The best interpolator in this case was SK, and the most biased were LP1 and TPS. Comparing all methods for both elements, the quality of the interpolations was higher for Ca2+. These results lead us to conclude that achieving the best prediction requires testing a wide range of interpolation methods.
The best accuracy will permit us to understand more precisely where the studied elements are most available to plants for growth and ecosystem recovery. The spatial pattern of both nutrients is related to ash pH and burn severity, evaluated from ash colour and CaCO3 content. These aspects will also be discussed in the work.
Norman, Matthew R.
2014-11-24
New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
Technical Note: spektr 3.0-A computational tool for x-ray spectrum modeling and analysis.
Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H
2016-08-01
A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS), updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a MATLAB (The MathWorks, Natick, MA) function library and improved user interface (UI), along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm(2)/mAs at 100 cm from the source) using TASMICS by default (with TASMIP as an option) in 1 keV energy bins over beam energies of 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from those assumed in TASMICS or TASMIP, and to account for factors such as anode angle. The median percent difference in photon counts between a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage differences arising in the low- and high-energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported close agreement between measured and calculated spectra, with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
Quadrature, Interpolation and Observability
NASA Technical Reports Server (NTRS)
Hodges, Lucille McDaniel
1997-01-01
Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff system. This thesis contains the definition of a Tchebycheff system, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis, but the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.
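The classical case discussed above, Gaussian quadrature for polynomials, is a short exercise with standard library nodes and weights: n nodes integrate any polynomial of degree up to 2n - 1 exactly.

```python
import numpy as np

# n-node Gaussian quadrature integrates polynomials of degree <= 2n - 1 exactly.
nodes, weights = np.polynomial.legendre.leggauss(5)

f = lambda t: t**8 - 3.0 * t**4 + t        # degree 8 <= 2*5 - 1
approx = weights @ f(nodes)                # quadrature over [-1, 1]
exact = 2.0 / 9.0 - 6.0 / 5.0              # analytic value of the integral
print(approx, exact)                       # agree to machine precision
```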
Three-dimensional trend mapping from wire-line logs
Doveton, J.H.; Ke-an, Z.
1985-01-01
Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in the lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of the shaliness of the unit. However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.
1996-01-01
The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Elimination of the dependent coefficients then leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components so derived identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James
2013-06-01
We introduce a generalized numerical prediction and forecasting algorithm. We have previously published it for malware byte sequence feature prediction and generalized distribution modeling for disparate test article analysis. We show how non-trivial non-periodic extrapolation of a numerical sequence (forecast and backcast) from the starting data is possible. Our ancestor-progeny prediction can yield new options for evolutionary programming. Our equations enable analytical integrals and derivatives to any order. Interpolation is controllable from smooth continuous to fractal structure estimation. We show how our generalized trigonometric polynomial can be derived using a Fourier transform.
Uniform high order spectral methods for one and two dimensional Euler equations
NASA Technical Reports Server (NTRS)
Cai, Wei; Shu, Chi-Wang
1991-01-01
Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.
NASA Astrophysics Data System (ADS)
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA), focused on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline considers only straight geometries without fittings. To address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated in a virtual length coordinate, which for practical purposes is not a complete solution. This research proposes, as a solution to the problem of leak isolation in a virtual length, the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
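A minimal sketch of the proposed virtual-to-real conversion: fit a low-order polynomial through calibration pairs of virtual (equivalent-straight-length) and real positions, then evaluate it at the virtual leak location reported by the observer. All coordinates below are hypothetical, and the polynomial order is an assumption since the paper does not specify it.

```python
import numpy as np

# Hypothetical calibration pairs: virtual (equivalent-straight-length)
# coordinates of known fittings vs. their real along-pipe positions [m].
z_virtual = np.array([0.0, 18.5, 41.0, 67.5, 92.0])
z_real = np.array([0.0, 15.0, 32.0, 55.0, 74.0])

# Low-order polynomial map from virtual to real coordinates.
coeffs = np.polyfit(z_virtual, z_real, deg=3)

leak_virtual = 50.3                        # leak position from the FMA observer
print(np.polyval(coeffs, leak_virtual))    # estimated real leak position
```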
Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two methods of preprocessing were used: the convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1) and alpha(2) derived from the detrended fluctuation analysis were calculated from raw HRV time series and the respective preprocessed signals. It was shown that the technique of cubic interpolation of HRV time series did not significantly change any nonlinear characteristic studied in this work, while the method of convolution affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density (PSD) model, such re-sampling does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. This new method also automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on the interpolation method chosen, such as "nearest," "linear," "cubic," or "spline" fitting in MATLAB. The conventional FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristics of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains the mid- and high-spatial-frequency characteristics of the given surface map by using a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resulting surface map.
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
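The cubic approximation of trigonometric functions mentioned above can be sketched as a Lagrange polynomial through four nodes. The node placement below is an assumption (the paper does not give its nodes), and on [0, pi/2] it already keeps the worst-case error near 1e-3.

```python
import numpy as np
from scipy.interpolate import lagrange

# Cubic Lagrange approximation of sin on [0, pi/2]: a fixed low-order
# polynomial that a soft-core processor can evaluate cheaply.
nodes = np.linspace(0.0, np.pi / 2.0, 4)   # node placement is an assumption
poly = lagrange(nodes, np.sin(nodes))

xs = np.linspace(0.0, np.pi / 2.0, 1001)
print(np.max(np.abs(poly(xs) - np.sin(xs))))  # worst-case error, order 1e-3
```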
NASA Astrophysics Data System (ADS)
Cuzzone, Joshua K.; Morlighem, Mathieu; Larour, Eric; Schlegel, Nicole; Seroussi, Helene
2018-05-01
Paleoclimate proxies are being used in conjunction with ice sheet modeling experiments to determine how the Greenland ice sheet responded to past changes, particularly during the last deglaciation. Although these comparisons have been a critical component in our understanding of the Greenland ice sheet's sensitivity to past warming, they often rely on modeling experiments that favor minimizing computational expense over increased model physics. Over paleoclimate timescales, simulating the thermal structure of the ice sheet has large implications for the modeled ice viscosity, which can feed back onto the basal sliding and ice flow. To accurately capture the thermal field, models often require a high number of vertical layers. This is not the case for the stress balance computation, however, where a high vertical resolution is not necessary. Consequently, since the stress balance and thermal equations are generally solved on the same mesh, more time is spent on the stress balance computation than is otherwise necessary. For these reasons, running a higher-order ice sheet model (e.g., Blatter-Pattyn) over timescales equivalent to the paleoclimate record has not been possible without incurring a large computational expense. To mitigate this issue, we propose a method that can be implemented within ice sheet models, whereby the vertical interpolation along the z axis relies on higher-order polynomials rather than the traditional linear interpolation. This method is tested within the Ice Sheet System Model (ISSM) using quadratic and cubic finite elements for the vertical interpolation on an idealized case and a realistic Greenland configuration. A transient experiment for the ice thickness evolution of a single-dome ice sheet demonstrates improved accuracy using the higher-order vertical interpolation compared to models using linear vertical interpolation, despite having fewer degrees of freedom. The method is also shown to improve a model's ability to capture sharp thermal gradients in an ice sheet, particularly close to the bed, when compared to models using linear vertical interpolation. This is corroborated in a thermal steady-state simulation of the Greenland ice sheet using a higher-order model. In general, we find that using a higher-order vertical interpolation decreases the need for a high number of vertical layers while dramatically reducing model runtime for transient simulations. Results indicate that runtimes for a transient ice sheet relaxation are upwards of 5 to 7 times faster than for a model using linear vertical interpolation, which requires a higher number of vertical layers to achieve similar results in simulated ice volume, basal temperature, and ice divide thickness. The findings suggest that this method will allow higher-order models to be used in studies investigating ice sheet behavior over paleoclimate timescales at a fraction of the computational cost that would otherwise be needed for a model using linear vertical interpolation.
TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE
NASA Technical Reports Server (NTRS)
Vu, B. T.
1994-01-01
TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
Voltage-controlled IPMC actuators for accommodating intra-ocular lens systems
NASA Astrophysics Data System (ADS)
Horiuchi, Tetsuya; Mihashi, Toshifumi; Fujikado, Takashi; Oshika, Tetsuro; Asaka, Kinji
2017-04-01
An ionic polymer-metal composite (IPMC) actuator has unique performance characteristics that we apply in this study for use within the eye. Cataracts are a common eye disease causing clouding of the lens. To treat cataracts, surgeons replace clouded lenses with intraocular lenses (IOLs). However, patients who receive this treatment must still wear reading glasses for tasks requiring close-up vision. We propose a new voltage-controlled accommodating IOL consisting of an IPMC actuator to change the lens's focus. We examined the relationship between the displacement performance of an IPMC actuator and the accommodating range of the IOL using in vitro experiments. We show that this system has an accommodating range of approximately 1.15 D under an applied voltage of ±1.2 V. By Lagrange interpolation, we estimate that with an IPMC actuator displacement of 0.14 mm, we can achieve a refractive power of 4 D, which is equivalent to the accommodating range of a 40-year-old person.
FFT applications to plane-polar near-field antenna measurements
NASA Technical Reports Server (NTRS)
Gatti, Mark S.; Rahmat-Samii, Yahya
1988-01-01
The four-point bivariate Lagrange interpolation algorithm was applied to near-field antenna data measured in a plane-polar facility. The results were sufficiently accurate to permit the use of the FFT (fast Fourier transform) algorithm to calculate the far-field patterns of the antenna. Good agreement was obtained between the far-field patterns as calculated by the Jacobi-Bessel and the FFT algorithms. The significant advantage of using the FFT is in the calculation of the principal plane cuts, which may be made very quickly. Also, the application of the FFT algorithm directly to the near-field data was used to perform surface holographic diagnosis of a reflector antenna. The effects due to the focusing of the emergent beam from the reflector, as well as the effects of the information in the wide-angle regions, are shown. The use of the plane-polar near-field antenna test range has therefore been expanded to include these useful FFT applications.
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
Spectroscopic evidence supporting the gravitational lens hypothesis for 1635+267 A,B
NASA Technical Reports Server (NTRS)
Turner, Edwin L.; Hillenbrand, Lynne A.; Schneider, Donald P.; Hewitt, Jacqueline N.; Burke, Bernard F.
1988-01-01
The gravitational lens hypothesis is tested for 1635+267 A,B by comparing the detailed line widths and shapes of the 2799-A Mg II and semiforbidden 1909-A C III] lines in each component. Following subtraction of an interpolating polynomial fit to the continua and the determination of a single optimum scaling factor (an amplification ratio of 2.83), reasonable agreement between the profiles of both lines in the two composites is obtained. Comparison of these lines to those in an unrelated quasar with a similar redshift and apparent magnitude does not produce a good match. It is suggested that the observed match in the 1635+267 A,B spectra arises from gravitational lensing.
Schulze, H Georg; Turner, Robin F B
2013-04-01
Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogenous spectra. Although some inhomogeneous spectral data sets can be accommodated--it requires replacing excessively modified spectra with the originals and removing their spikes with a median filter instead--caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
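A minimal sketch of the despiking rule described above, assuming rows are relatively homogeneous, baseline-flattened spectra: fit a third-order polynomial through all spectra at each wavenumber and clip intensities exceeding the fit by 3 sigma. The noise scale here is estimated robustly from the residuals via the median absolute deviation, which is an implementation choice, not a detail given in the abstract.

```python
import numpy as np

def despike(spectra, order=3, nsigma=3.0):
    """Clip cosmic-ray spikes: at every wavenumber, fit a polynomial through
    all spectra and reduce intensities above fit + nsigma*sigma to that level.
    The noise scale is estimated robustly here (MAD), an implementation choice."""
    idx = np.arange(spectra.shape[0], dtype=float)
    out = spectra.copy()
    for j in range(spectra.shape[1]):
        col = spectra[:, j]
        fit = np.polyval(np.polyfit(idx, col, order), idx)
        resid = col - fit
        sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        out[:, j] = np.minimum(col, fit + nsigma * sigma)
    return out

rng = np.random.default_rng(1)
data = rng.normal(100.0, 1.0, size=(20, 500))   # homogeneous toy spectra
data[7, 250] += 80.0                            # synthetic cosmic ray spike
clean = despike(data)
print(data[7, 250], clean[7, 250])              # spike reduced to threshold
```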
Anisotropic k-essence cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chimento, Luis P.; Forte, Monica
We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between V_l ∝ φ^(-γ_l) in the shear-dominated regime and V_l ∝ φ^(-2) at late time. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
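As a toy illustration of a spline constructed from truncated polynomials (one of the two constructions treated in Part 1), the following sketch builds a cubic truncated-power design matrix and fits it by least squares; the knot placement and the test function are arbitrary choices for the example:

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Design matrix for a spline in the truncated power basis:
    1, x, ..., x^degree, then (x - knot)_+^degree for each interior knot."""
    cols = [x**p for p in range(degree + 1)]
    cols += [np.where(x > k, (x - k)**degree, 0.0) for k in knots]
    return np.column_stack(cols)

# least-squares spline approximation of noisy "point cloud" data
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
A = truncated_power_basis(x, knots=np.linspace(1.0, 9.0, 8))
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef                          # smooth spline approximation
```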
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code only are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. For deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of the mean function, based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
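The nested-polynomial parametrization itself is not reproduced here, but the underlying idea of a polynomial trend combined with a Gaussian process interpolant can be sketched as plain universal kriging; the squared-exponential kernel, its length scale, and the toy data are assumptions made only for illustration:

```python
import numpy as np

def kernel(a, b, ell=0.5):
    """Squared-exponential covariance between 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def fit_predict(x, y, xs, deg=2, nugget=1e-10):
    """Universal kriging: polynomial trend + GP interpolation of residuals."""
    F = np.vander(x, deg + 1)             # trend basis at training points
    Fs = np.vander(xs, deg + 1)           # trend basis at prediction points
    K = kernel(x, x) + nugget * np.eye(x.size)   # nugget for stability
    # joint system for GP weights alpha and trend coefficients beta
    Z = np.zeros((deg + 1, deg + 1))
    A = np.block([[K, F], [F.T, Z]])
    rhs = np.concatenate([y, np.zeros(deg + 1)])
    sol = np.linalg.solve(A, rhs)
    alpha, beta = sol[:x.size], sol[x.size:]
    return kernel(xs, x) @ alpha + Fs @ beta

x = np.linspace(0.0, 1.0, 8)              # very few "code evaluations"
y = np.sin(6.0 * x) + 4.0 * x**2          # expensive-code outputs
xs = np.linspace(0.0, 1.0, 200)
y_pred = fit_predict(x, y, xs)
```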
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis
Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.
2016-01-01
Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm^2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
An exact general remeshing scheme applied to physically conservative voxelization
Powell, Devon; Abel, Tom
2015-05-21
We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as "physically conservative voxelization." At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
Boolean Operations with Prism Algebraic Patches
Bajaj, Chandrajit; Paoluzzi, Alberto; Portuesi, Simone; Lei, Na; Zhao, Wenqi
2009-01-01
In this paper we discuss a symbolic-numeric algorithm for Boolean operations, closed in the algebra of curved polyhedra whose boundary is triangulated with algebraic patches (A-patches). This approach uses a linear polyhedron as a first approximation of both the arguments and the result. On each triangle of a boundary representation of such linear approximation, a piecewise cubic algebraic interpolant is built, using a C1-continuous prism algebraic patch (prism A-patch) that interpolates the three triangle vertices, with given normal vectors. The boundary representation only stores the vertices of the initial triangulation and their external vertex normals. In order to represent also flat and/or sharp local features, the corresponding normal-per-face and/or normal-per-edge may be also given, respectively. The topology is described by storing, for each curved triangle, the two triples of pointers to incident vertices and to adjacent triangles. For each triangle, a scaffolding prism is built, produced by its extreme vertices and normals, which provides a containment volume for the curved interpolating A-patch. When looking for the result of a regularized Boolean operation, the 0-set of a tri-variate polynomial within each such prism is generated, and intersected with the analogous 0-sets of the other curved polyhedron, when two prisms have non-empty intersection. The intersection curves of the boundaries are traced and used to decompose each boundary into the 3 standard classes of subpatches, denoted in, out and on. While tracing the intersection curves, the locally refined triangulation of intersecting patches is produced, and added to the boundary representation.
NASA Astrophysics Data System (ADS)
Tirani, M. D.; Maleki, M.; Kajani, M. T.
2014-11-01
A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
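The specific nonclassical Radau points and Freud-type weights are not reproduced here; the sketch below only illustrates the domain-mapping step, using a generic smooth monotonic map x = t/(1-t) (an assumption, not the paper's map) and standard Chebyshev interpolation, to show how a decaying profile on [0,∞) becomes a well-behaved interpolation problem on [0,1):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# smooth, strictly monotonic map from t in [0, 1) to x in [0, inf)
to_x = lambda t: t / (1.0 - t)

f = lambda x: np.exp(-x) / (1.0 + x)          # sample decaying profile

# interpolate f(x(t)) on [0, 1) at mapped Chebyshev points
t_nodes = 0.5 * (C.chebpts2(32) + 1.0) * (1.0 - 1e-8)  # keep nodes below 1
coeffs = C.chebfit(t_nodes, f(to_x(t_nodes)), 31)

t_test = np.linspace(0.0, 0.99, 500)          # t = 0.99 maps to x = 99
err = np.max(np.abs(C.chebval(t_test, coeffs) - f(to_x(t_test))))
print(f"max interpolation error: {err:.2e}")
```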
NASA Astrophysics Data System (ADS)
Pereira, P.; Pundyte, N.; Vaitkute, D.; Cepanko, V.; Pranskevicius, M.; Ubeda, X.; Mataix-Solera, J.; Cerda, A.
2012-04-01
Fire can significantly affect soil moisture (SM) and water repellency (WR) in the immediate post-fire period, due to the effect of temperature on the soil profile and ash. This impact can be very heterogeneous, even over small distances, owing to different combustion conditions (e.g. fuel and soil moisture, fuel amount, type, distribution and connectivity, and geomorphological variables such as aspect and slope) that influence fire temperature and severity. The aim of this work is to study the spatial distribution of SM and WR in a small plot (400 m2, with a sampling distance of 5 m) immediately after a low severity grassland fire. Measurements were made in a burned plot and in a control (unburned) plot used as a reference for comparison. In each plot we analyzed a total of 25 samples. SM was measured gravimetrically and WR with the water drop penetration time (WDPT) test. Several interpolation methods were tested in order to identify the best predictor of SM and WR: Inverse Distance Weighting (IDW) (with powers of 1, 2, 3, 4 and 5), Local Polynomial of first and second order, Polynomial Regression (PR), Radial Basis Functions (RBF) including Multilog (MTG), Natural Cubic Spline (NCS), Multiquadratic (MTQ), Inverse Multiquadratic (IMTQ) and Thin Plate Spline (TPS), and Ordinary Kriging. Interpolation accuracy was assessed by cross-validation, taking each observation in turn out of the sample and estimating it from the remaining ones. The errors produced by each interpolation allowed us to calculate the Root Mean Square Error (RMSE); the best method is the one with the lowest RMSE. The results showed that, on average, SM in the control plot was 13.59% (±2.83) and WR 2.9 (±1.3) seconds (s). The majority of the soils (88%) were hydrophilic (WDPT <5 s). SM in the control plot showed a weak negative relationship with WR (r=-0.33, p<0.10). The coefficient of variation (CV%) was 20.77% for SM and 44.62% for WR. In the burned plot, SM was 14.17% (±2.83) and WR 151 (±99) seconds. All the samples analysed were considered hydrophobic (WDPT >5 s). We did not identify significant relationships among the variables (r=0.06, p>0.05), and the CV% was higher for WR (65.85%) than for SM (19.96%). Overall, we identified no significant changes in SM between plots, which means that fire did not have important implications for soil water content, contrary to what was observed for WR. The same dynamic was observed in the CV%. Among all tested methods, the most accurate interpolator of SM was IDW 1 in the control plot and IDW 2 in the burned plot, which means that fire did not induce important changes in the spatial distribution of SM. For WR, the best predictor was NCS in the control plot and IDW 1 in the burned plot, which means that the spatial distribution of WR was substantially affected by fire; we observed an increase of the small-scale variability in the burned area. We are currently monitoring this burned area and observing the evolution of the spatial variability of these two soil properties. It is important to observe their dynamics in space and time and to assess whether fire will have medium- and long-term implications for SM and WR. Discussion of the results will be carried out during the poster session.
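A small sketch of the ranking procedure used above, leave-one-out cross-validation RMSE for IDW at several powers, on synthetic values standing in for the 25 gravimetric samples (the field data themselves are not reproduced):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power):
    """Inverse distance weighted interpolation."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)             # guard against zero distance
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

def loo_rmse(xy, z, power):
    """Leave-one-out cross-validation RMSE for one IDW power."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw(xy[mask], z[mask], xy[i][None, :], power)
        errs.append(pred[0] - z[i])
    return np.sqrt(np.mean(np.square(errs)))

# 5 x 5 grid at 5 m spacing, mimicking the 400 m2 sampling layout
rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.arange(5) * 5.0, np.arange(5) * 5.0)
xy = np.column_stack([gx.ravel(), gy.ravel()])
z = 14.0 + rng.normal(0.0, 2.8, 25)      # synthetic soil moisture values
for p in (1, 2, 3, 4, 5):
    print(f"IDW power {p}: RMSE = {loo_rmse(xy, z, p):.3f}")
```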
What Did We Think Could Be Learned About Earth From Lagrange Point Observations?
NASA Technical Reports Server (NTRS)
Wiscombe, Warren
2011-01-01
The scientific excitement surrounding the NASA Lagrange point mission Triana, now called DSCOVR, tended to be forgotten in the brouhaha over other aspects of the mission. Yet a small band of scientists in 1998 got very excited about the possibilities offered by the Lagrange-point perspective on our planet. As one of the original co-investigators on the Triana mission, I witnessed that scientific excitement firsthand. I will bring to life the early period, circa 1998 to 2000, and share the reasons that we thought the Lagrange-point perspective on Earth would be scientifically revolutionary.
NASA Astrophysics Data System (ADS)
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
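The blended fifth-order flux is not reconstructed here, but the MUSCL ingredient named in the abstract can be sketched in its standard second-order form with a minmod limiter; periodic boundaries are an assumption of the example:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction of left/right interface states.

    u : 1D array of cell averages (periodic boundaries assumed).
    Returns (uL, uR) at the interface between cell i and cell i+1.
    """
    du_minus = u - np.roll(u, 1)          # backward differences
    du_plus = np.roll(u, -1) - u          # forward differences
    slope = minmod(du_minus, du_plus)
    uL = u + 0.5 * slope                  # left state at interface i+1/2
    uR = np.roll(u - 0.5 * slope, -1)     # right state at interface i+1/2
    return uL, uR
```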
Rosen, Joseph; Kelner, Roy
2014-11-17
The Lagrange invariant is a well-known law for optical imaging systems formulated in the frame of ray optics. In this study, we reformulate this law in terms of wave optics and relate it to the resolution limits of various imaging systems. Furthermore, this modified Lagrange invariant is generalized for imaging along the z axis, resulting in the axial Lagrange invariant, which can be used to analyze the axial resolution of various imaging systems. To demonstrate the effectiveness of the theory, analysis of the lateral and the axial imaging resolutions is provided for Fresnel incoherent correlation holography (FINCH) systems.
Seismoelectric Effects based on Spectral-Element Method for Subsurface Fluid Characterization
NASA Astrophysics Data System (ADS)
Morency, C.
2017-12-01
Present approaches for subsurface imaging rely predominantly on seismic techniques, which alone do not capture fluid properties and related mechanisms. On the other hand, electromagnetic (EM) measurements add constraints on the fluid phase through electrical conductivity and permeability, but EM signals alone do not offer information on the solid structural properties. In recent years, there have been many efforts to combine both seismic and EM data for exploration geophysics. The most popular approach is based on joint inversion of seismic and EM data as decoupled phenomena, missing the coupled nature of seismic and EM phenomena such as seismoelectric effects. Seismoelectric effects are related to pore fluid movements with respect to the solid grains. By analyzing coupled poroelastic seismic and EM signals, one can capture pore scale behavior and access both structural and fluid properties. Here, we model the seismoelectric response by solving the governing equations derived by Pride and Garambois (1994), which correspond to Biot's poroelastic wave equations and Maxwell's electromagnetic wave equations coupled electrokinetically. We will show that these coupled wave equations can be numerically implemented by taking advantage of viscoelastic-electromagnetic mathematical equivalences. These equations will be solved using a spectral-element method (SEM). The SEM, in contrast to finite-element methods (FEM), uses high degree Lagrange polynomials. Not only does this allow the technique to handle complex geometries similarly to FEM, but it also retains exponential convergence and accuracy due to the use of high degree polynomials. Finally, we will discuss how this is a first step toward fully coupled seismic-EM inversion to improve subsurface fluid characterization. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
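A small sketch of the high-degree Lagrange basis that distinguishes the SEM from low-order FEM, built on Gauss-Lobatto-Legendre nodes (the node set commonly used in SEM; the degree here is an arbitrary example):

```python
import numpy as np
from scipy.interpolate import lagrange

def gll_nodes(n):
    """n Gauss-Lobatto-Legendre nodes: +/-1 plus the roots of P'_{n-1}."""
    P = np.polynomial.legendre.Legendre.basis(n - 1)
    return np.concatenate(([-1.0], P.deriv().roots(), [1.0]))

nodes = gll_nodes(6)                      # degree-5 element, 6 nodes
# cardinal Lagrange basis functions on one spectral element
basis = [lagrange(nodes, np.eye(len(nodes))[i]) for i in range(len(nodes))]
print(basis[0](nodes))                    # ~[1, 0, 0, 0, 0, 0] (cardinality)
```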
Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2003-01-01
A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in the excessive use of computational resources and diminish robustness. A new approach, Anisotropic Lagrange Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster, more robust, and to produce higher quality grids than source term hybridization.
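ALBTFI itself is anisotropic and Lagrange-based; as a baseline, plain linear transfinite interpolation of a 2D block from its four boundary curves can be sketched as follows (the array shapes and boundary parametrization are assumptions of the example):

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """Linear transfinite interpolation of a structured 2D grid from its
    four boundary curves, each an (n, 2) or (m, 2) array of points.
    bottom/top have n points; left/right have m points; corners must match."""
    n, m = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, n)[None, :, None]    # shape (1, n, 1)
    eta = np.linspace(0.0, 1.0, m)[:, None, None]   # shape (m, 1, 1)
    return ((1 - eta) * bottom[None, :, :] + eta * top[None, :, :]
            + (1 - xi) * left[:, None, :] + xi * right[:, None, :]
            - (1 - xi) * (1 - eta) * bottom[0]      # subtract bilinear
            - xi * (1 - eta) * bottom[-1]           # corner contributions
            - (1 - xi) * eta * top[0]
            - xi * eta * top[-1])                   # result: (m, n, 2)

# example: unit-width block with a curved top boundary
s = np.linspace(0.0, 1.0, 21)
r = np.linspace(0.0, 1.0, 11)
bottom = np.column_stack([s, np.zeros_like(s)])
top = np.column_stack([s, 1.0 + 0.1 * np.sin(np.pi * s)])
left = np.column_stack([np.zeros_like(r), r])
right = np.column_stack([np.ones_like(r), r])
grid = tfi_2d(bottom, top, left, right)             # shape (11, 21, 2)
```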
Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations
Wang, Ming; Zhong, Lin
2015-01-01
In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and a one-and-a-half order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of the theories developed here by numerical experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the gPC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D elliptic stochastic partial differential equations in random space.
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the idea. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
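A minimal scalar sketch of the Chebyshev-Picard fixed-point idea behind the MCPI, using NumPy's Chebyshev utilities; the cold constant start used here is exactly the situation that a warm start from the continuation series would improve:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, x0, t0, t1, deg=40, iters=40):
    """Modified Chebyshev Picard Iteration for dx/dt = f(t, x), x(t0) = x0.

    Each pass fits the forcing along the current trajectory approximation
    at Chebyshev nodes, integrates the series, and updates the trajectory."""
    tau = C.chebpts2(deg + 1)                 # nodes on the reference interval
    t = t0 + 0.5 * (tau + 1.0) * (t1 - t0)
    x = np.full_like(t, x0)                   # cold start: constant trajectory
    for _ in range(iters):
        c = C.chebfit(tau, f(t, x), deg)      # Chebyshev fit of f(t, x_k(t))
        ci = C.chebint(c, lbnd=-1.0, scl=0.5 * (t1 - t0))
        x = x0 + C.chebval(tau, ci)           # Picard update at the nodes
    return t, x

t, x = mcpi(lambda t, x: -x, 1.0, 0.0, 5.0)   # dx/dt = -x, x(0) = 1
print(np.max(np.abs(x - np.exp(-t))))         # error should be tiny
```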
H2+, HeH and H2: Approximating potential curves, calculating rovibrational states
NASA Astrophysics Data System (ADS)
Olivares-Pilón, Horacio; Turbiner, Alexander V.
2018-06-01
Analytic consideration of the Born-Oppenheimer (BO) potential curves for diatomic molecules is proposed: an accurate analytic interpolation for a potential curve, consistent with its rovibrational spectra, is found. It is shown that in the BO approximation, for the four lowest electronic states 1sσg, 2pσu, 2pπu and 3dπg of H2+, the ground state X2Σ+ of HeH and the two lowest states 1Σg+ and 3Σu+ of H2, the potential curves can be analytically interpolated over the full range of internuclear distances R with no fewer than 4-6 significant digits. The approximation, based on matching a Laurent-type expansion at small R with a combination of the multipole expansion and a one-instanton type contribution at large R, is given by a two-point Padé approximant. The position of the minimum, when it exists, is predicted within 1% or better. For the molecular ion H2+, the spectra of vibrational, rotational and rovibrational states (ν, L) associated with the 1sσg, 2pσu, 2pπu and 3dπg potential curves are calculated with the Lagrange mesh method. In general, they coincide with the spectra found via numerical solution of the Schrödinger equation (when available) to six significant digits. It is shown that the 1sσg curve contains 19 vibrational states (ν, 0), while the 2pσu curve contains a single one (0, 0) and the 2pπu state contains 12 vibrational states (ν, 0). In total, the 1sσg electronic curve contains 420 rovibrational states, which increases to 423 beyond the BO approximation. For the state 2pσu, the total number of rovibrational states (all with ν = 0) is equal to 3, within or beyond the Born-Oppenheimer approximation. For the state 2pπu, within the Born-Oppenheimer approximation, the total number of rovibrational bound states is equal to 284. The state 3dπg is repulsive; no rovibrational state is found. The Lagrange mesh formalism confirms the statement that the ground state potential curve of the heteronuclear molecule HeH does not support rovibrational states. An accurate analytical expression for the potential curves of the hydrogen molecule H2 for the states 1Σg+ and 3Σu+ is presented. The ground state 1Σg+ contains 15 vibrational states (ν, 0), ν = 0-14. In total, this state supports 301 rovibrational states. The potential curve of the state 3Σu+ has a shallow minimum: it does not support any rovibrational state; it is repulsive.
Automated integration of the Lagrange equations in orbital motion.
NASA Astrophysics Data System (ADS)
Abad, A.; San Juan, J. F.
The new techniques of algebraic manipulation, especially the Poisson Series Processor, permit the analytical integration of increasingly complex problems of celestial mechanics. The authors are developing a new Poisson Series Processor, PSPC, and they use it to solve the Lagrange equations of orbital motion. They integrate the Lagrange equations using the stroboscopic method, and apply it to the main problem of artificial satellite theory.
An accurate method for computer-generating tungsten anode x-ray spectra from 30 to 140 kV.
Boone, J M; Seibert, J A
1997-11-01
A tungsten anode spectral model using interpolating polynomials (TASMIP) was used to compute x-ray spectra at 1 keV intervals over the range from 30 kV to 140 kV. The TASMIP is not semi-empirical and uses no physical assumptions regarding x-ray production, but rather interpolates measured constant potential x-ray spectra published by Fewell et al. [Handbook of Computed Tomography X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1981)]. X-ray output measurements (mR/mAs measured at 1 m) were made on a calibrated constant potential generator in our laboratory from 50 kV to 124 kV, and with 0-5 mm added aluminum filtration. The Fewell spectra were slightly modified (numerically hardened) and normalized based on the attenuation and output characteristics of a constant potential generator and metal-insert x-ray tube in our laboratory. Then, using the modified Fewell spectra of different kVs, the photon fluence phi at each 1 keV energy bin (E) over energies from 10 keV to 140 keV was characterized using polynomial functions of the form phi(E) = a0[E] + a1[E] kV + a2[E] kV^2 + ... + an[E] kV^n. A total of 131 polynomial functions were used to calculate accurate x-ray spectra, each function requiring between two and four terms. The resulting TASMIP algorithm produced x-ray spectra that match both the quality and quantity characteristics of the x-ray system in our laboratory. For photon fluences above 10% of the peak fluence in the spectrum, the average percent difference (and standard deviation) between the modified Fewell spectra and the TASMIP photon fluence was -1.43% (3.8%) for the 50 kV spectrum, -0.89% (1.37%) for the 70 kV spectrum, and for the 80, 90, 100, 110, 120, 130 and 140 kV spectra, the mean differences between spectra were all less than 0.20% and the standard deviations were less than approximately 1.1%. The model was also extended to include the effects of generator-induced kV ripple. Finally, the x-ray photon fluence in the units of photons/mm^2 per mR was calculated as a function of HVL, kV, and ripple factor, for various (water-equivalent) patient thicknesses (0, 10, 20, and 30 cm). These values may be useful for computing the detective quantum efficiency, DQE(f), of x-ray detector systems. The TASMIP algorithm and ancillary data are made available on line at http://www.aip.org/epaps/epaps.html.
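A sketch of how such a per-energy-bin polynomial table is evaluated; the coefficients below are random placeholders, not the published TASMIP values, and only the functional form of phi(E) as a polynomial in kV is taken from the abstract:

```python
import numpy as np

# Hypothetical coefficient table: one row per 1 keV energy bin, one column
# per polynomial term (real TASMIP tables use two to four terms per bin).
rng = np.random.default_rng(7)
energies = np.arange(10, 141)             # 10..140 keV bins
coeffs = rng.random((energies.size, 4))   # a0[E], a1[E], a2[E], a3[E]

def fluence(kv):
    """phi(E) = a0[E] + a1[E]*kV + a2[E]*kV^2 + a3[E]*kV^3, per energy bin."""
    powers = kv ** np.arange(4)
    phi = coeffs @ powers
    phi[energies > kv] = 0.0              # no photons above the tube potential
    return phi

spectrum_80kv = fluence(80.0)
```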
1. EXTERIOR VIEW OF 209 WARE STREET LOOKING SOUTH. THIS STRUCTURE WAS ONE OF APPROXIMATELY SEVENTEEN DUPLEXES BUILT AS THE ORIGINAL WORKER HOUSING FOR THE LaGRANGE COTTON MILLS, LATER KNOWN AS CALUMET MILL. LaGRANGE MILLS (1888-89) WAS THE FIRST COTTON MILL IN LaGRANGE. NOTE THE GABLE-ON-HIP ROOF FORM AND TWO IDENTICAL STRUCTURES VISIBLE TO THE LEFT. - 209 Ware Street (House), 209 Ware Street, La Grange, Troup County, GA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
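A minimal sketch of a Lagrange-sinc basis on a uniform grid, in which the grid spacing plays the role of the scaling factor discussed above; the test function and grid extent are arbitrary:

```python
import numpy as np

def lagrange_sinc_basis(x, centers, h):
    """Lagrange-sinc (cardinal sine) basis on a uniform grid of spacing h:
    phi_i(x) = sinc((x - x_i) / h), so phi_i(x_j) = delta_ij on grid points."""
    return np.sinc((x[:, None] - centers[None, :]) / h)

h = 0.2                                    # analogous role to the scaling factor
centers = np.arange(-5.0, 5.0 + h, h)
x = np.linspace(-5.0, 5.0, 1001)
B = lagrange_sinc_basis(x, centers, h)

# expand a smooth function in the basis: coefficients are just grid values
f = lambda x: np.exp(-x**2)
approx = B @ f(centers)
print(np.max(np.abs(approx - f(x))))       # small for h well below the width
```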
Variational Integrators for Interconnected Lagrange-Dirac Systems
NASA Astrophysics Data System (ADS)
Parks, Helen; Leok, Melvin
2017-10-01
Interconnected systems are an important class of mathematical models, as they allow for the construction of complex, hierarchical, multiphysics, and multiscale models by the interconnection of simpler subsystems. Lagrange-Dirac mechanical systems provide a broad category of mathematical models that are closed under interconnection, and in this paper, we develop a framework for the interconnection of discrete Lagrange-Dirac mechanical systems, with a view toward constructing geometric structure-preserving discretizations of interconnected systems. This work builds on previous work on the interconnection of continuous Lagrange-Dirac systems (Jacobs and Yoshimura in J Geom Mech 6(1):67-98, 2014) and discrete Dirac variational integrators (Leok and Ohsawa in Found Comput Math 11(5), 529-562, 2011). We test our results by simulating some of the continuous examples given in Jacobs and Yoshimura (2014).
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide a straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on cartesian grids, non-conformal block-structured discretization, formulation of spline bases of higher continuity over triangulation, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
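A short example of working with B-spline bases through SciPy, assuming scipy is available; it builds a cubic B-spline interpolant, exposes its knot vector and coefficients, and differentiates it exactly as a piecewise polynomial:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 2.0 * np.pi, 15)
y = np.sin(x)

spl = make_interp_spline(x, y, k=3)        # cubic B-spline interpolant
print(spl.t.shape, spl.c.shape)            # knot vector and coefficients

xx = np.linspace(0.0, 2.0 * np.pi, 400)
print(f"max error: {np.max(np.abs(spl(xx) - np.sin(xx))):.1e}")

d_spl = spl.derivative()                   # exact piecewise-polynomial derivative
```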
NASA Astrophysics Data System (ADS)
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to a certain scene and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on an ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the rough edge pixel neighborhood are adopted to locate the detected rough edges at sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
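A minimal sketch of the DoG receptive-field filter named above (the center and surround widths are arbitrary example values; the eye-tremor component and the sub-pixel refinement are not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(image, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians response, a crude model of an ON-center
    receptive field: narrow excitatory center minus wide inhibitory surround."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

# synthetic "infrared" frame: a warm blob on a noisy background
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:128, 0:128]
frame = (np.exp(-((xx - 64)**2 + (yy - 64)**2) / 300.0)
         + 0.05 * rng.standard_normal((128, 128)))
response = dog_filter(frame)               # strongest along the blob contour
```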
Light sterile neutrinos and inflationary freedom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gariazzo, S.; Giunti, C.; Laveder, M., E-mail: gariazzo@to.infn.it, E-mail: giunti@to.infn.it, E-mail: laveder@pd.infn.it
2015-04-01
We perform a cosmological analysis in which we allow the primordial power spectrum of scalar perturbations to assume a shape that is different from the usual power-law predicted by the simplest models of cosmological inflation. We parameterize the free primordial power spectrum with a "piecewise cubic Hermite interpolating polynomial" (PCHIP). We consider a 3+1 neutrino mixing model with a sterile neutrino having a mass at the eV scale, which can explain the anomalies observed in short-baseline neutrino oscillation experiments. We find that the freedom of the primordial power spectrum allows us to reconcile the cosmological data with a fully thermalized sterile neutrino in the early Universe. Moreover, the cosmological analysis gives us some information on the shape of the primordial power spectrum, which presents a feature around the wavenumber k = 0.002 Mpc^-1.
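PCHIP interpolation of a free spectrum between movable nodes can be sketched directly with SciPy; the knot placement and amplitudes below are placeholders, not the paper's values:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# nodes in log10(k) with free amplitudes, standing in for the paper's knots
log_k = np.linspace(-4.0, -0.3, 12)        # assumed knot placement
log_P = -9.0 + 0.05 * np.random.default_rng(5).standard_normal(12)

pchip = PchipInterpolator(log_k, log_P)    # shape-preserving, no overshoot

kk = np.logspace(-4.0, -0.3, 500)
P_k = 10.0 ** pchip(np.log10(kk))          # reconstructed primordial spectrum
```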
The Hodge-Elliptic Genus, Spinning BPS States, and Black Holes
NASA Astrophysics Data System (ADS)
Kachru, Shamit; Tripathy, Arnav
2017-10-01
We perform a refined count of BPS states in the compactification of M-theory on K3 × T^2, keeping track of the information provided by both the SU(2)_L and SU(2)_R angular momenta in the SO(4) little group. Mathematically, this four variable counting function may be expressed via the motivic Donaldson-Thomas counts of K3 × T^2, simultaneously refining Katz, Klemm, and Pandharipande's motivic stable pairs counts on K3 and Oberdieck-Pandharipande's Gromov-Witten counts on K3 × T^2. This provides the first full answer for motivic curve counts of a compact Calabi-Yau threefold. Along the way, we develop a Hodge-elliptic genus for Calabi-Yau manifolds: a new counting function for BPS states that interpolates between the Hodge polynomial and the elliptic genus of a Calabi-Yau.
Truncated Calogero-Sutherland models
NASA Astrophysics Data System (ADS)
Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.
2017-05-01
A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest and next-nearest neighbors interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons as well as an extension with truncated interactions. While the ground state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.
1. STREETSCAPE VIEW OF 208 VINE STREET (FIRST HOUSE ON RIGHT) LOOKING WEST. THIS STRUCTURE WAS ONE OF APPROXIMATELY SEVENTEEN DUPLEXES BUILT AS THE ORIGINAL WORKER HOUSING FOR THE LaGRANGE COTTON MILLS, LATER KNOWN AS CALUMET MILL. LaGRANGE MILLS (1888-89) WAS THE FIRST COTTON MILL IN LaGRANGE. NOTE THE GABLE-ON-HIP ROOF FORM AND IDENTICAL STRUCTURES FACING EACH OTHER ALONG BOTH SIDES OF THE NARROW STREET. - 208 Vine Street (House), 208 Vine Street, La Grange, Troup County, GA
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on coarse and fine meshes with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity
NASA Astrophysics Data System (ADS)
Cheon, Jung Hee; Jarecki, Stanislaw; Seo, Jae Hong
Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n>2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection only practical for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both the communication and the computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n^2 k) bits of communication and Õ(n^2 k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust. This means that the protocol outputs the desired result even if some corrupted players leave during the execution of the protocol.
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [-0.78, 0.00] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
NASA Astrophysics Data System (ADS)
Andreaus, Ugo; Spagnuolo, Mario; Lekszycki, Tomasz; Eugster, Simon R.
2018-04-01
We present a finite element discrete model for pantographic lattices, based on a continuous Euler-Bernoulli beam for modeling the fibers composing the pantographic sheet. This model takes into account large displacements, rotations and deformations; the Euler-Bernoulli beam is described by using nonlinear interpolation functions, a Green-Lagrange strain for elongation and a curvature depending on elongation. On the basis of the introduced discrete model of a pantographic lattice, we perform some numerical simulations. We then compare the obtained results to an experimental BIAS extension test on a pantograph printed with polyamide PA2200. The pantographic structures involved in the numerical as well as in the experimental investigations are not proper fabrics: they are composed of just a few fibers, which theoretically justifies the use of the Euler-Bernoulli beam theory in the description of the fibers. We compare the experiments to numerical simulations in which we allow the fibers to slide elastically with respect to one another at the interconnecting pivots. The result is a very good agreement between the numerical simulations, based on the introduced model, and the experimental measurements.
Further Improvement in 3DGRAPE
NASA Technical Reports Server (NTRS)
Alter, Stephen
2004-01-01
3DGRAPE/AL:V2 denotes version 2 of the Three-Dimensional Grids About Anything by Poisson's Equation with Upgrades from Ames and Langley computer program. The preceding version, 3DGRAPE/AL, was described in Improved 3DGRAPE (ARC-14069) NASA Tech Briefs, Vol. 21, No. 5 (May 1997), page 66. These programs are so named because they generate volume grids by iteratively solving Poisson's Equation in three dimensions. The grids generated by the various versions of 3DGRAPE have been used in computational fluid dynamics (CFD). The main novel feature of 3DGRAPE/AL:V2 is the incorporation of an optional scheme in which anisotropic Lagrange-based trans-finite interpolation (ALBTFI) is coupled with exponential decay functions to compute and blend interior source terms. In the input to 3DGRAPE/AL:V2 the user can specify whether or not to invoke ALBTFI in combination with exponential-decay controls, angles, and cell size for controlling the character of grid lines. Of the known programs that solve elliptic partial differential equations for generating grids, 3DGRAPE/AL:V2 is the only code that offers a combination of speed and versatility with most options for controlling the densities and other characteristics of grids for CFD.
The nonlinear aeroelastic characteristics of a folding wing with cubic stiffness
NASA Astrophysics Data System (ADS)
Hu, Wei; Yang, Zhichun; Gu, Yingsong; Wang, Xiaochen
2017-07-01
This paper focuses on the nonlinear aeroelastic characteristics of a folding wing in the quasi-steady condition (namely at fixed folding angles) and during the morphing process. The structural model of the folding wing is formulated by the Lagrange equations, and a constraint equation is used to describe the morphing strategy. The aerodynamic influence coefficient matrices at several folding angles are calculated by the Doublet Lattice method and described as rational functions in the Laplace domain by rational function approximation; the Kriging surrogate model technique is then adopted to interpolate the coefficient matrices of the rational functions, and the aerodynamic model of the folding wing during the morphing process is built. The aeroelastic responses of the folding wing with cubic stiffness are simulated, and the results show that the motion types of the aeroelastic responses in the quasi-steady condition and during the morphing process are all sensitive to the initial condition and folding angle. During the morphing process, a transition between motion types is observed. Apart from the transition period, the aeroelastic response at some folding angles may exhibit different motion types, which can also be found in the results for the quasi-steady condition.
A three-dimensional nonlinear Timoshenko beam based on the core-congruential formulation
NASA Technical Reports Server (NTRS)
Crivelli, Luis A.; Felippa, Carlos A.
1992-01-01
A three-dimensional, geometrically nonlinear two-node Timoshenko beam element based on the total Lagrangian description is derived. The element behavior is assumed to be linear elastic, but no restrictions are placed on the magnitude of finite rotations. The resulting element has twelve degrees of freedom: six translational components and six rotational-vector components. The formulation uses the Green-Lagrange strains and second Piola-Kirchhoff stresses as energy-conjugate variables and accounts for the bending-stretching and bending-torsional coupling effects without special provisions. The core-congruential formulation (CCF) is used to derive the discrete equations in a staged manner. Core equations involving the internal force vector and tangent stiffness matrix are developed at the particle level. A sequence of matrix transformations carries these equations to beam cross-sections and finally to the element nodal degrees of freedom. The choice of finite rotation measure is made in the next-to-last transformation stage, and the choice of over-the-element interpolation in the last one. The tangent stiffness matrix is found to retain symmetry if the rotational vector is chosen to measure finite rotations. An extensive set of numerical examples is presented to test and validate the present element.
Development of Coriolis mass flowmeter with digital drive and signal processing technology.
Hou, Qi-Li; Xu, Ke-Jun; Fang, Min; Liu, Cui; Xiong, Wen-Jun
2013-09-01
Coriolis mass flowmeters (CMFs) often suffer from two-phase flow, which may cause flowtube stalling. To solve this problem, a digital drive method and a digital signal processing method for CMFs are studied and implemented in this paper. A positive-negative step signal is used to initiate the flowtube oscillation without knowing the natural frequency of the flowtube. A digital zero-crossing detection method based on Lagrange interpolation is adopted to calculate the frequency and phase difference of the sensor output signals in order to synthesize the digital drive signal. The digital drive approach is implemented by a multiplying digital to analog converter (MDAC) and a direct digital synthesizer (DDS). A digital Coriolis mass flow transmitter is developed with a digital signal processor (DSP) to control the digital drive and realize the signal processing. Water flow calibrations and gas-liquid two-phase flow experiments are conducted to examine the performance of the transmitter. The experimental results show that the transmitter shortens the start-up time and can maintain the oscillation of the flowtube under two-phase flow conditions.
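A sketch of sub-sample zero-crossing detection by local polynomial interpolation, in the spirit of the Lagrange-interpolation step described above (a three-point quadratic is used here; the signal parameters are arbitrary):

```python
import numpy as np

def zero_crossings(signal, fs):
    """Sub-sample rising zero-crossing times via 3-point interpolation.

    Around each sign change, a quadratic through the three nearest samples
    is solved for its root, refining the crossing instant beyond the
    sampling grid (as needed for frequency/phase-difference estimation)."""
    times = []
    for i in np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0)):
        if i == 0:
            continue                       # need one sample before the change
        # quadratic p(t) through samples at i-1, i, i+1 (t in sample units)
        p = np.polyfit([i - 1, i, i + 1], signal[i - 1:i + 2], 2)
        roots = np.roots(p)
        root = roots[np.argmin(np.abs(roots - (i + 0.5)))].real
        times.append(root / fs)
    return np.asarray(times)

fs = 5000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
a = np.sin(2 * np.pi * 80 * t)             # "inlet" sensor signal
b = np.sin(2 * np.pi * 80 * t - 0.02)      # phase-shifted "outlet" signal
dt = zero_crossings(b, fs)[0] - zero_crossings(a, fs)[0]
print(f"phase difference ~ {2 * np.pi * 80 * dt:.4f} rad")   # ~0.0200
```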
How to use the Sun-Earth Lagrange points for fundamental physics and navigation
NASA Astrophysics Data System (ADS)
Tartaglia, A.; Lorenzini, E. C.; Lucchesi, D.; Pucacco, G.; Ruggiero, M. L.; Valko, P.
2018-01-01
We illustrate the proposal, nicknamed LAGRANGE, to use spacecraft located at the Sun-Earth Lagrange points as a physical reference frame. By performing time-of-flight measurements of electromagnetic signals traveling on closed paths between the points, we show that it would be possible: (a) to refine knowledge of the gravitational time delay due to both the Sun and the Earth; (b) to detect the gravitomagnetic frame dragging of the Sun, thereby deducing information about the interior of the star; (c) to check the possible existence of a galactic gravitomagnetic field, which would imply a revision of the properties of the dark matter halo; (d) to set up a relativistic positioning and navigation system at the scale of the inner solar system. The paper presents estimated values for the relevant quantities and discusses the feasibility of the project by analyzing the behavior of the space devices close to the Lagrange points.
Xiao, Qiang; Zeng, Zhigang
2017-10-01
Existing results on Lagrange stability and finite-time synchronization for memristive recurrent neural networks (MRNNs) are scale-free with respect to time evolution, and some restrictions appear naturally. In this paper, two novel scale-limited comparison principles are established by means of inequality techniques and the induction principle on time scales. Results concerning Lagrange stability and global finite-time synchronization of MRNNs on time scales are then obtained. Scale-limited Lagrange stability criteria are derived in detail via nonsmooth analysis and the theory of time scales. Moreover, novel criteria for achieving global finite-time synchronization are acquired. In addition, the derived method can also be used to study global finite-time stabilization. The proposed results extend or improve existing ones in the literature. Two numerical examples are chosen to show the effectiveness of the obtained results.
Analytical Dynamics and Nonrigid Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Likins, P. W.
1974-01-01
Applications to the simulation of idealized spacecraft are considered, both for multiple-rigid-body models and for models consisting of combinations of rigid bodies and elastic bodies, with the elastic bodies defined either as continua, as finite-element systems, or as collections of given modal data. Several specific examples are developed in detail by alternative methods of analytical mechanics, and the results are compared to a Newton-Euler formulation. The following methods are developed from d'Alembert's principle in vector form: (1) Lagrange's form of d'Alembert's principle for independent generalized coordinates; (2) Lagrange's form of d'Alembert's principle for simply constrained systems; (3) Kane's quasi-coordinate formulation of d'Alembert's principle; (4) Lagrange's equations for independent generalized coordinates; (5) Lagrange's equations for simply constrained systems; (6) Lagrangian quasi-coordinate equations (the Boltzmann-Hamel equations); (7) Hamilton's equations for simply constrained systems; and (8) Hamilton's equations for independent generalized coordinates.
NASA Astrophysics Data System (ADS)
Gugg, Christoph; Harker, Matthew; O'Leary, Paul
2013-03-01
This paper describes the physical setup and mathematical modelling of a device for the measurement of structural deformations over large scales, e.g., a mining shaft. Image processing techniques are used to determine the deformation by measuring the position of a target relative to a reference laser beam. A particular novelty is the incorporation of electro-active glass; the polymer-dispersed liquid crystal shutters enable the simultaneous calibration of any number of consecutive measurement units without manual intervention, i.e., the process is fully automatic. It is necessary to compensate for optical distortion if high accuracy is to be achieved in a compact hardware design where lenses with short focal lengths are used. Wide-angle lenses exhibit significant distortion, which is typically characterized using Zernike polynomials. Radial distortion models assume that the lens is rotationally symmetric; such models are insufficient in the application at hand. This paper presents a new coordinate mapping procedure based on a tensor product of discrete orthogonal polynomials. Both lens distortion and the projection are compensated by a single linear transformation. Once calibrated, to acquire the measurement data it is necessary to localize a single laser spot in the image. For this purpose, complete interpolation and rectification of the image is not required; hence, we have developed a new hierarchical approach based on a quad-tree subdivision. Cross-validation tests verify the validity, demonstrating that the proposed method accurately models both the optical distortion and the projection. The achievable accuracy is e <= +/-0.01 mm in a field of view of 150 mm x 150 mm at a distance of 120 m from the laser source. Finally, a Kolmogorov-Smirnov test shows that the error distribution in localizing a laser spot is Gaussian. Consequently, due to the linearity of the proposed method, this also applies to the algorithm's output. Therefore, first-order covariance propagation provides an accurate estimate of the measurement uncertainty, which is essential for any measurement device.
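As a rough illustration of the tensor-product idea (not the authors' discrete orthogonal basis or calibration data), the sketch below fits a bicubic tensor-product polynomial map from image coordinates to target coordinates by least squares, so that a single linear transformation compensates both distortion and projection. The synthetic warp is a hypothetical stand-in for measured calibration points.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Synthetic calibration data: a projective-ish warp plus mild lens distortion
# (stand-in for real calibration target measurements).
rng = np.random.default_rng(0)
xi, yi = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)   # image coords
r2 = xi**2 + yi**2
xw = 1.02 * xi + 0.03 * yi + 0.05 * xi * r2                 # world coords
yw = -0.01 * xi + 0.98 * yi + 0.05 * yi * r2

deg = (3, 3)                             # bicubic tensor-product basis
V = P.polyvander2d(xi, yi, deg)          # design matrix, one row per point
cx, *_ = np.linalg.lstsq(V, xw, rcond=None)
cy, *_ = np.linalg.lstsq(V, yw, rcond=None)

# A single linear transformation (matrix-vector product) now maps image
# coordinates to rectified coordinates, compensating distortion + projection.
xw_hat = V @ cx
print("RMS residual:", np.sqrt(np.mean((xw_hat - xw) ** 2)))
```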
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
Zdeněk Kopal: Numerical Analyst
NASA Astrophysics Data System (ADS)
Křížek, M.
2015-07-01
We give a brief overview of Zdeněk Kopal's life, his activities in the Czech Astronomical Society, his collaboration with Vladimír Vand, and his studies at Charles University, Cambridge, Harvard, and MIT. Then we survey Kopal's professional life. He published 26 monographs and 20 conference proceedings. We will concentrate on Kopal's extensive monograph Numerical Analysis (1955, 1961) that is widely accepted to be the first comprehensive textbook on numerical methods. It describes, for instance, methods for polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations with initial or boundary conditions, and numerical solution of integral and integro-differential equations. Special emphasis will be laid on error analysis. Kopal himself applied numerical methods to celestial mechanics, in particular to the N-body problem. He also used Fourier analysis to investigate light curves of close binaries to discover their properties. This is, in fact, a problem from mathematical analysis.
Dynamic mesh for TCAD modeling with ECORCE
NASA Astrophysics Data System (ADS)
Michez, A.; Boch, J.; Touboul, A.; Saigné, F.
2016-08-01
Mesh generation for TCAD modeling is challenging. Because carrier densities can change by several orders of magnitude across thin regions, a significant change in the solution can be observed for two very similar meshes. The mesh must be defined carefully to minimize this change. To address this issue, a criterion based on polynomial interpolation over adjacent nodes is proposed that accurately adjusts the mesh to the gradients of the degrees of freedom (DF). Furthermore, a dynamic mesh that follows changes of the DF in DC and transient modes is a powerful tool for TCAD users. But in transient modeling, adding nodes to a mesh induces oscillations in the solution, which appear as spikes in the current collected at the contacts. This paper proposes two schemes that solve this problem. Examples show that, using these techniques, the dynamic mesh generator of the TCAD tool ECORCE handles semiconductor devices in DC and transient modes.
NASA Technical Reports Server (NTRS)
Leviton, Douglas; Frey, Bradley
2005-01-01
The current refractive optical design of the James Webb Space Telescope (JWST) Near Infrared Camera (NIRCam) uses three infrared materials in its lenses: LiF, BaF2, and ZnSe. In order to provide the instrument's optical designers with accurate, heretofore unavailable data for absolute refractive index based on actual cryogenic measurements, two prismatic samples of each material were measured using the cryogenic, high accuracy, refraction measuring system (CHARMS) at NASA GSFC, densely covering the temperature range from 15 to 320 K and the wavelength range from 0.4 to 5.6 microns. Measurement methods are discussed, and graphical and tabulated data for absolute refractive index, dispersion, and thermo-optic coefficient for these three materials are presented along with estimates of uncertainty. Coefficients for second-order polynomial fits of measured index to temperature are provided for many wavelengths to allow accurate interpolation of index to other wavelengths and temperatures.
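A minimal sketch of how such second-order fits are used for interpolation follows; the index values are illustrative placeholders, not CHARMS measurements.

```python
import numpy as np

# Hypothetical measured refractive indices of a material at one wavelength
# (illustrative numbers only, not CHARMS data).
T = np.array([20.0, 60.0, 100.0, 160.0, 220.0, 280.0, 320.0])   # K
n = np.array([1.3920, 1.3918, 1.3913, 1.3904, 1.3892, 1.3877, 1.3866])

coef = np.polyfit(T, n, 2)          # second-order fit, as in the abstract
n_interp = np.polyval(coef, 150.0)  # interpolate index at 150 K
dn_dT = np.polyval(np.polyder(coef), 150.0)  # thermo-optic coefficient dn/dT
print(f"n(150 K) ~ {n_interp:.5f}, dn/dT ~ {dn_dT:.2e} 1/K")
```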
NASA Astrophysics Data System (ADS)
Heine, A.; Berger, M.
The classical meaning of motion design is the use of laws of motion with convenient characteristic values. The software MOCAD, by contrast, supports a graphical and interactive mode of operation, among other things through automatic polynomial interpolation. Besides direct coupling to motion control systems, different file formats for data export are offered. The calculation of planar and spatial cam mechanisms is also based on the data generated in the motion design module. Using the example of an intermittent cam mechanism with an inside cam profile, employed as a new drive concept for indexing tables, the influence of motion design on the transmission properties is shown. Another example gives an insight into the calculation and export of envelope curves for cylindrical cam mechanisms. The resulting geometry data can be used for generating realistic 3D models in the CAD system Pro/ENGINEER, using a special data exchange format.
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
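A minimal sketch of the derivative-matrix construction follows, built from the barycentric weights of the Lagrange interpolant for an arbitrary set of distinct collocation points; the Chebyshev-Gauss-Lobatto node choice is one of the collocation-point options such studies compare.

```python
import numpy as np

def diff_matrix(x):
    """First-derivative collocation matrix for arbitrary distinct nodes x,
    built from barycentric weights of the Lagrange interpolant."""
    X = x[:, None] - x[None, :]
    np.fill_diagonal(X, 1.0)
    w = 1.0 / X.prod(axis=1)             # barycentric weights
    D = (w[None, :] / w[:, None]) / X    # off-diagonal entries
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))  # rows sum to zero (exact for constants)
    return D

# Chebyshev-Gauss-Lobatto points resolve steep gradients better than uniform ones.
n = 16
x = np.cos(np.pi * np.arange(n + 1) / n)
D = diff_matrix(x)
err = D @ np.exp(x) - np.exp(x)          # derivative of exp is exp
print("max error:", np.abs(err).max())
```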
A comparative study of upwind and MacCormack schemes for CAA benchmark problems
NASA Technical Reports Server (NTRS)
Viswanathan, K.; Sankar, L. N.
1995-01-01
In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy
2002-01-01
A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g., lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.
High temperature alkali corrosion in high velocity gases
NASA Technical Reports Server (NTRS)
Lowell, C. E.; Sidik, S. M.; Deadmore, D. L.
1981-01-01
The effects of potential impurities in coal-derived liquids such as Na, K, Mg, Ca and Cl on the accelerated corrosion of IN-100, U-700, IN-792 and Mar-M509 were investigated using a Mach 0.3 burner rig for times up to 1000 hours in one-hour cycles. These impurities were injected in combination as aqueous solutions into the combustor of the burner rig. The experimental matrix was designed statistically. The extent of corrosion was determined by metal recession. The metal recession data were fitted by linear regression to a polynomial expression which allows both interpolation and extrapolation of the data. As anticipated, corrosion increased rapidly with Na and K, and a marked maximum in the temperature response was noted for many conditions. In contrast, corrosion decreased somewhat as the Ca, Mg and Cl contents increased. Extensive corrosion was observed at concentrations of Na and K as low as 0.1 ppm at long times.
Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.
2006-01-01
A viewgraph presentation on centrifuge rotor models with a comparison of the Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.
On the commutator of C∞-symmetries and the reduction of Euler-Lagrange equations
NASA Astrophysics Data System (ADS)
Ruiz, A.; Muriel, C.; Olver, P. J.
2018-04-01
A novel procedure to reduce by four the order of the Euler-Lagrange equations associated with nth-order variational problems involving single-variable integrals is presented. In preparation, a new formula for the commutator of two C∞-symmetries is established.
Dirac structures in vakonomic mechanics
NASA Astrophysics Data System (ADS)
Jiménez, Fernando; Yoshimura, Hiroaki
2015-08-01
In this paper, we explore the dynamics of the nonholonomic system called vakonomic mechanics in the context of Lagrange-Dirac dynamical systems, using a Dirac structure and its associated Hamilton-Pontryagin variational principle. We first show the link between vakonomic mechanics and nonholonomic mechanics from the viewpoints of Dirac structures as well as Lagrangian submanifolds. Namely, we clarify that Lagrangian submanifold theory cannot represent nonholonomic mechanics properly, but vakonomic mechanics instead. Second, in order to represent vakonomic mechanics, we employ the space TQ × V*, where a vakonomic Lagrangian is defined from a given Lagrangian (possibly degenerate) subject to nonholonomic constraints. Then, we show how implicit vakonomic Euler-Lagrange equations can be formulated by the Hamilton-Pontryagin variational principle for the vakonomic Lagrangian on the extended Pontryagin bundle (TQ ⊕ T*Q) × V*. Associated with this variational principle, we establish a Dirac structure on (TQ ⊕ T*Q) × V* in order to define an intrinsic vakonomic Lagrange-Dirac system. Furthermore, we also establish another construction for the vakonomic Lagrange-Dirac system using a Dirac structure on T*Q × V*, where we introduce a vakonomic Dirac differential. Finally, we illustrate our theory of vakonomic Lagrange-Dirac systems with examples such as the vakonomic skate and the vertical rolling coin.
A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates
NASA Astrophysics Data System (ADS)
Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus
2008-12-01
A global model of the atmosphere is presented governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented, for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands. Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
78 FR 43821 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
[Flattened table excerpt; recoverable entries:] (flooding source truncated): +902 ft, Unincorporated Areas of LaGrange County. Big Long Lake, entire shoreline: +957 ft, Unincorporated Areas of LaGrange County. Big Turkey Lake, entire shoreline within (truncated): +932 ft, Unincorporated Areas of LaGrange County. Notes: + North American Vertical Datum; depth in feet above ground; [caret] Mean Sea Level.
A Bayesian analysis of trends in ozone sounding data series from 9 Nordic stations
NASA Astrophysics Data System (ADS)
Christiansen, Bo; Jepsen, Nis; Larsen, Niels; Korsholm, Ulrik S.
2016-04-01
Ozone soundings from nine Nordic stations have been homogenized and interpolated to standard pressure levels. The different stations have very different data coverage; the longest period with data extends from the end of the 1980s to 2013. We apply a model that includes low-frequency variability in the form of a polynomial, an annual cycle with harmonics, the possibility of low-frequency variability in the annual amplitude and phasing, and either white noise or AR1 noise. The fitting of the parameters is performed with a Bayesian approach, giving not only the posterior mean values but also credible intervals. We find that all stations agree on a well-defined annual cycle in the free troposphere with a relatively confined maximum in the early summer. Regarding the low-frequency variability, we find that Scoresbysund, Ny Aalesund, and Sodankyla show similar structures, with a maximum near 2005 followed by a decrease; however, these results are only weakly significant. A significant change in the amplitude of the annual cycle was found only for Ny Aalesund, where the peak-to-peak amplitude changes from 0.9 to 0.8 mhPa between 1995-2000 and 2007-2012. The results are shown to be robust to the different settings of the model parameters (order of the polynomial, number of harmonics in the annual cycle, type of noise, etc.). The results are also shown to be characteristic of all pressure levels in the free troposphere.
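For illustration, the sketch below assembles the deterministic part of such a model (polynomial trend plus annual harmonics) and fits it by ordinary least squares to synthetic data; the actual paper performs a Bayesian fit with credible intervals, which this simplified sketch omits.

```python
import numpy as np

def design_matrix(t, poly_order=3, n_harmonics=2):
    """Columns: polynomial trend in time plus sine/cosine annual harmonics.
    t is decimal years; a Bayesian fit would put priors on these coefficients."""
    cols = [t**k for k in range(poly_order + 1)]
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * t))
        cols.append(np.cos(2 * np.pi * h * t))
    return np.column_stack(cols)

# Synthetic monthly "ozone" series: slow trend + annual cycle + white noise.
rng = np.random.default_rng(1)
t = np.arange(1990, 2014, 1 / 12)
y = (0.002 * (t - 2000) ** 2
     + 0.5 * np.sin(2 * np.pi * t - 1.0)
     + 0.1 * rng.standard_normal(t.size))

X = design_matrix(t - 2000)            # center time for numerical conditioning
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = X[:, :4] @ beta[:4]            # reconstructed low-frequency component
print("fitted trend range:", trend.min(), trend.max())
```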
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
NASA Astrophysics Data System (ADS)
Koshkarbayev, Nurbol; Kanguzhin, Baltabek
2017-09-01
In this paper we study the question of fully describing the well-posed restrictions of a given maximal differential operator on a tree graph. A Lagrange formula for a differential operator on a tree with Kirchhoff conditions at its internal vertices is presented.
ERIC Educational Resources Information Center
Lovell, M.S.
2007-01-01
This paper presents a derivation of all five Lagrange points by methods accessible to sixth-form students, and provides a further opportunity to match Newtonian gravity with centripetal force. The predictive powers of good scientific theories are also discussed with regard to the philosophy of science. Methods for calculating the positions of the…
Bounded state variables and the calculus of variations
NASA Technical Reports Server (NTRS)
Hanafy, L. M.
1972-01-01
An optimal control problem with bounded state variables is transformed into a Lagrange problem by means of differentiable mappings which take some Euclidean space onto the control and state regions. Whereas all such mappings lead to a Lagrange problem, it is shown that only those which are defined as acceptable pairs of transformations are suitable in the sense that solutions to the transformed Lagrange problem will lead to solutions to the original bounded state problem and vice versa. In particular, an acceptable pair of transformations is exhibited for the case when the control and state regions are right parallelepipeds. Finally, a description of the necessary conditions for the bounded state problem which were obtained by this method is given.
NASA Technical Reports Server (NTRS)
Watts, G.
1992-01-01
A programming technique to eliminate computational instability in multibody simulations that use the Lagrange multiplier is presented. The computational instability occurs when the attached bodies drift apart and violate the constraints. The programming technique uses the constraint equation, instead of integration, to determine the coordinates that are not independent. Although the equations of motion are unchanged, a complete derivation of the incorporation of the Lagrange multiplier into the equation of motion for two bodies is presented. A listing of a digital computer program which uses the programming technique to eliminate computational instability is also presented. The computer program simulates a solid rocket booster and parachute connected by a frictionless swivel.
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
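A toy sketch of the same ingredients (an entropy-type barrier enforcing nonnegativity, a Lagrange multiplier enforcing a linear equality constraint, and a descending barrier parameter) is given below on a tiny simplex-constrained linear program rather than an actual TSP; the step size and barrier schedule are illustrative assumptions, not the paper's.

```python
import numpy as np

def barrier_lp(c, mu_seq, eta=0.5, inner=500):
    """Minimize c @ x subject to sum(x) = 1, x >= 0, using an entropy barrier
    mu * sum(x log x) for nonnegativity and a Lagrange multiplier lam for the
    equality constraint, with mu driven down a descending sequence."""
    lam = 0.0
    for mu in mu_seq:
        for _ in range(inner):
            # Minimizer of c@x + mu*sum(x log x) + lam*(sum(x)-1) in closed form:
            x = np.exp(-(c + lam) / mu - 1.0)      # stays positive automatically
            lam += eta * mu * (x.sum() - 1.0)      # dual (multiplier) update
    return x

c = np.array([0.8, 0.3, 0.5, 0.9])
x = barrier_lp(c, mu_seq=[1.0, 0.3, 0.1, 0.03, 0.01])
print(x.round(4))   # mass concentrates on the cheapest coordinate (index 1)
```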
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
Three-dimensional flat shell-to-shell coupling: numerical challenges
NASA Astrophysics Data System (ADS)
Guo, Kuo; Haikal, Ghadir
2017-11-01
The node-to-surface formulation is widely used in contact simulations with finite elements because it is relatively easy to implement using different types of element discretizations. This approach, however, has a number of well-known drawbacks, including locking due to over-constraint when the formulation is used as a two-pass method. Most studies on the node-to-surface contact formulation have been conducted using solid elements, and little has been done to investigate the effectiveness of this approach for beam or shell elements. In this paper we show that locking can also be observed with the node-to-surface contact formulation when applied to plate and flat shell elements, even with a single-pass implementation with distinct master/slave designations, which is the standard solution to locking with solid elements. In our study, we use the quadrilateral four-node flat shell element for thin (Kirchhoff-Love) and thick (Reissner-Mindlin) plate theory, both in their standard forms and with improved formulations such as the linked interpolation [1] and the Discrete Kirchhoff [2] elements for thick and thin plates, respectively. The Lagrange multiplier method is used to enforce the node-to-surface constraints for all elements. The results show clear locking when compared to those obtained using a conforming mesh configuration.
Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.
2014-01-01
The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672
Making a georeferenced mosaic of historical map series using constrained polynomial fit
NASA Astrophysics Data System (ADS)
Molnár, G.
2009-04-01
Present-day GIS software packages make it possible to handle several hundred rasterised map sheets. For proper usage of such datasets we usually have two requirements: first, the map sheets should be georeferenced; second, the georeferenced maps should fit together properly, without overlaps or gaps. Both requirements can be fulfilled easily if the geodetic background of the map series is accurate and the projection of the map series is known. In this case the individual map sheets should be georeferenced in the projected coordinate system of the map series; that is, every individual map sheet is georeferenced using the overprinted coordinate grid or the projected coordinates of the image corners as ground control points (GCPs). If after this georeferencing procedure the map sheets do not fit together (for example, because a different projection was used for every map sheet, as is the case for the Third Military Survey), a common projection can be chosen, and all the georeferenced maps transformed to this common projection using a map-to-map transformation.

If the geodetic background is not so strong, i.e., there are distortions inside the map sheets, a polynomial (linear, quadratic or cubic) fit can be used for georeferencing the map sheets. Finding identical surface objects (as GCPs) on the historical map and on a present-day cartographic map lets us determine a transformation between raw image coordinates (x, y) and projected coordinates (Easting, Northing; E, N). This means that several GCPs should be found for each map sheet (for linear, quadratic or cubic transformations at least 3, 5 or 10, respectively), and every map sheet should be transformed to a present-day coordinate system individually using these GCPs. The disadvantage of this method is that, after the transformation, the individually transformed map sheets no longer necessarily fit together properly. Reversing the order of the procedure does not help either: if we build the mosaic first (e.g., graphically) and apply the polynomial fit to this mosaic afterwards, we still cannot reduce the error caused by the internal inaccuracy of the map sheets.

We can overcome this problem by calculating the transformation parameters of the polynomial fit with constraints (Mikhail, 1976). The constraint is that the common edge of two neighboring map sheets should transform identically, i.e., the right edge of the left image and the left edge of the right image should fit together after the transformation. This condition must hold for all internal (not only vertical but also horizontal) edges of the mosaic. Constraints are expressed as relationships between parameters: the parameters of the polynomial transformation should satisfy not only the least-squares adjustment criteria but also the constraint that the transformed coordinates be identical along the shared image edges. (In the example above, for image points in the rightmost column of the left image the transformed coordinates should be the same as for the image points in the leftmost column of the right image, and these transformed coordinates may depend on the line-number image coordinate of the raster point.) The normal equation system can be solved with Lagrange multipliers. The resulting set of parameters should then be applied to the transformation of all map sheets. This parameter set cannot be applied directly in GIS software.

The simplest way to apply these parameters is to 'simulate' GCPs for every image and use these simulated GCPs for georeferencing the individual map sheets. This method was applied to a set of map sheets of the First Military Survey of the Habsburg Empire with acceptable results. Reference: Mikhail, E. M.: Observations and Least Squares. IEP-A Dun-Donnelley Publisher, New York, 1976, 497 pp.
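A 1-D toy version of the constrained adjustment may clarify the mechanics: two "sheets" are each fitted with a polynomial (here a line), subject to the Lagrange-multiplier constraint that the fits agree at the shared edge. The bordered (KKT) normal equations are the generic machinery for equality-constrained least squares; the data points are invented for illustration.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||A p - b||^2 subject to C p = d via the bordered
    (KKT) normal equations with Lagrange multipliers."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # parameters, multipliers

# Toy analogue of two adjacent map sheets: fit a line to each sheet's control
# points, constrained so both lines agree at the shared edge x = 1.
x1, y1 = np.array([0.0, 0.4, 0.8]), np.array([0.1, 0.5, 0.8])
x2, y2 = np.array([1.2, 1.6, 2.0]), np.array([1.3, 1.6, 2.1])

# Parameters p = [a1, b1, a2, b2] for y = a + b*x on each sheet.
A = np.zeros((6, 4))
A[:3, 0], A[:3, 1] = 1.0, x1
A[3:, 2], A[3:, 3] = 1.0, x2
b = np.concatenate([y1, y2])
C = np.array([[1.0, 1.0, -1.0, -1.0]])   # a1 + b1*1 - (a2 + b2*1) = 0
d = np.array([0.0])

p, lam = constrained_lsq(A, b, C, d)
print("sheet 1 at edge:", p[0] + p[1], " sheet 2 at edge:", p[2] + p[3])
```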
A Person Fit Test for IRT Models for Polytomous Items
ERIC Educational Resources Information Center
Glas, C. A. W.; Dagohoy, Anna Villa T.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…
Lagrange multiplier for perishable inventory model considering warehouse capacity planning
NASA Astrophysics Data System (ADS)
Amran, Tiena Gustina; Fatima, Zenny
2017-06-01
This paper presents a Lagrange multiplier approach for solving perishable raw material inventory planning under a warehouse capacity constraint. A food company faced the issue of managing perishable raw materials and marinades with limited shelf life. Another constraint to be considered was the capacity of the warehouse. Therefore, an inventory model considering shelf life and raw material warehouse capacity is needed in order to minimize the company's inventory cost. The inventory model implemented in this study is an adapted economic order quantity (EOQ) model optimized using a Lagrange multiplier. The model and solution approach were applied to a case in a food manufacturer. The result showed that the total inventory cost decreased by 2.42% after applying the proposed approach.
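A minimal sketch of the underlying textbook construction, a multi-item EOQ with a shared warehouse-capacity constraint solved by bisection on the Lagrange multiplier, is shown below; the demands, costs and capacity are illustrative numbers, not the case company's data.

```python
import numpy as np

D = np.array([1200.0, 800.0, 500.0])   # annual demand per item
K = np.array([50.0, 40.0, 60.0])       # ordering cost per order
h = np.array([2.0, 1.5, 3.0])          # holding cost per unit per year
f = np.array([1.0, 2.0, 1.5])          # warehouse space per unit
F = 300.0                              # total warehouse capacity

def Q(lam):
    # KKT stationarity of the Lagrangian gives the modified EOQ formula.
    return np.sqrt(2 * D * K / (h + 2 * lam * f))

def used(lam):
    return f @ Q(lam)

if used(0.0) <= F:
    lam = 0.0                          # capacity not binding: plain EOQ
else:
    lo, hi = 0.0, 1.0
    while used(hi) > F:                # bracket the multiplier
        hi *= 2
    for _ in range(60):                # bisection on the monotone constraint
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if used(mid) > F else (lo, mid)
    lam = 0.5 * (lo + hi)

print("lambda:", round(lam, 4),
      "order sizes:", Q(lam).round(1),
      "space used:", round(used(lam), 1))
```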
Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we study appropriate numerical simulation methods for vibration soil cutting. Three numerical simulation methods commonly used for constant-speed soil cutting (Lagrange, ALE and DEM) are analyzed. Three corresponding simulation models of vibration soil cutting are established using LS-DYNA. The applicability of the three methods to this problem is analyzed in combination with the model mechanisms and the simulation results. Both the Lagrange method and the DEM method can reproduce the force oscillation of the tool and the large deformation of the soil in vibration cutting, and the Lagrange method better represents the breaking of soil into debris. Because of its poor stability, the ALE method is not suitable for the vibration soil cutting problem.
Efficient modeling of photonic crystals with local Hermite polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucher, C. R.; Li, Zehao; Albrecht, J. D.
2014-04-21
Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.
Trends and annual cycles in soundings of Arctic tropospheric ozone
NASA Astrophysics Data System (ADS)
Christiansen, Bo; Jepsen, Nis; Kivi, Rigel; Hansen, Georg; Larsen, Niels; Smith Korsholm, Ulrik
2017-08-01
Ozone soundings from nine Nordic stations have been homogenized and interpolated to standard pressure levels. The different stations have very different data coverage; the longest period with data is from the end of the 1980s to 2014. At each pressure level the homogenized ozone time series have been analysed with a model that includes both low-frequency variability in the form of a polynomial, an annual cycle with harmonics, the possibility for low-frequency variability in the annual amplitude and phasing, and either white noise or noise given by a first-order autoregressive process. The fitting of the parameters is performed with a Bayesian approach not only giving the mean values but also confidence intervals. The results show that all stations agree on a well-defined annual cycle in the free troposphere with a relatively confined maximum in the early summer. Regarding the low-frequency variability, it is found that Scoresbysund, Ny Ålesund, Sodankylä, Eureka, and Ørland show similar, significant signals with a maximum near 2005 followed by a decrease. This change is characteristic for all pressure levels in the free troposphere. A significant change in the annual cycle was found for Ny Ålesund, Scoresbysund, and Sodankylä. The changes at these stations are in agreement with the interpretation that the early summer maximum is appearing earlier in the year. The results are shown to be robust to the different settings of the model parameters such as the order of the polynomial, number of harmonics in the annual cycle, and the type of noise.
NASA Astrophysics Data System (ADS)
Crnomarkovic, Nenad; Belosevic, Srdjan; Tomanovic, Ivan; Milicevic, Aleksandar
2017-12-01
The effects of the number of significant figures (NSF) in the interpolation polynomial coefficients (IPCs) of the weighted-sum-of-gray-gases model (WSGM) on the results of numerical investigations and on WSGM optimization were investigated. The investigation was conducted using numerical simulations of the processes inside a pulverized-coal-fired furnace. The radiative properties of the gas phase were determined using the simple gray gas model (SG), the two-term WSGM (W2), and the three-term WSGM (W3). Ten sets of IPCs with the same NSF were formed for every weighting coefficient in both W2 and W3. The average and maximal relative differences of the flame temperatures, wall temperatures, and wall heat fluxes were determined. The investigation showed that the results of the numerical simulations were affected by the NSF unless it exceeded a certain value. Increasing the NSF did not necessarily lead to WSGM optimization; a suitable combination of the NSF (CNSF) was the necessary requirement for WSGM optimization.
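The sensitivity being tested can be illustrated with a toy WSGM evaluation in which the weighting-coefficient polynomials are rounded to a given NSF before computing total emissivity; the polynomial coefficients and absorption values below are hypothetical placeholders, not a published WSGM set.

```python
import numpy as np

def round_sig(x, nsf):
    """Round each coefficient to nsf significant figures."""
    x = np.asarray(x, dtype=float)
    mag = np.floor(np.log10(np.abs(np.where(x == 0, 1, x))))
    return np.round(x / 10**mag, nsf - 1) * 10**mag

# Hypothetical 3-gray-gas WSGM: weights a_i(T) are polynomials in T
# (ascending coefficients); kappas are absorption coefficients.
coeffs = np.array([[ 4.31e-01,  2.02e-04, -9.67e-08],
                   [ 2.67e-01, -1.11e-04,  2.58e-08],
                   [ 1.38e-01, -3.29e-05,  9.14e-09]])
kappa = np.array([0.192, 1.719, 11.37])   # 1/(atm m), illustrative
pL = 0.5                                   # partial-pressure path length, atm m

def emissivity(coeffs, T):
    a = np.array([np.polyval(c[::-1], T) for c in coeffs])  # weights a_i(T)
    return float(a @ (1.0 - np.exp(-kappa * pL)))

T = 1400.0
ref = emissivity(coeffs, T)
for nsf in (2, 3, 4, 6):
    eps = emissivity(round_sig(coeffs, nsf), T)
    print(f"NSF={nsf}: emissivity={eps:.6f}, rel. diff={abs(eps - ref)/ref:.2e}")
```

The relative differences saturate once the NSF reaches the precision of the stored coefficients, mirroring the abstract's observation that results are affected by the NSF only up to a certain value.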
Estimation of chirp rates of music-adapted prolate spheroidal atoms using reassignment
NASA Astrophysics Data System (ADS)
Mesz, Bruno; Serrano, Eduardo
2007-09-01
We introduce a modified Matching Pursuit algorithm for estimating the frequency and frequency slope of FM-modulated music signals. The use of Matching Pursuit with constant-frequency atoms provides coarse estimates, which can be improved with chirped atoms, in principle better suited to this kind of signal. Application of the reassignment method is suggested by its good localization properties for chirps. We start by considering a family of atoms generated by modulation and scaling of a prolate spheroidal wave function. These functions are concentrated in frequency on intervals of a semitone centered at the frequencies of the well-tempered scale. At each stage of the pursuit, we search for the atom most correlated with the signal. We then consider the spectral peaks at each frame of the spectrogram and calculate a modified frequency and frequency slope using the derivatives of the reassignment operators; this is then used to estimate the parameters of a cubic interpolation polynomial that models local pitch fluctuations. We apply the method to both synthetic and music signals.
Numerical methods for coupled fracture problems
NASA Astrophysics Data System (ADS)
Viesca, Robert C.; Garagash, Dmitry I.
2018-04-01
We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyshev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.
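As a minimal illustration of the Chebyshev-root quadrature such methods build on, the sketch below applies the n-point Gauss-Chebyshev rule of the first kind, which integrates against the characteristic 1/sqrt(1 - x^2) weight and is exact for polynomial integrands of degree below 2n.

```python
import numpy as np

def gauss_chebyshev1(f, n):
    """Integrate f(x)/sqrt(1 - x^2) over (-1, 1) using the n-point
    Gauss-Chebyshev rule at roots of the first-kind polynomial T_n."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))   # Chebyshev roots
    return (np.pi / n) * np.sum(f(x))           # equal weights pi/n

# Example: integral of x^2 / sqrt(1 - x^2) over (-1, 1) equals pi/2.
approx = gauss_chebyshev1(lambda x: x**2, 8)
print(approx, np.pi / 2)
```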
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
Fuzzy crane control with sensorless payload deflection feedback for vibration reduction
NASA Astrophysics Data System (ADS)
Smoczek, Jaroslaw
2014-05-01
Different types of cranes are widely used for shifting cargo on building sites, in shipping yards and container terminals, and in many manufacturing segments, where fast and precise transfer of a payload suspended on ropes, with reduced oscillations, is important for productivity, efficiency and safety. The paper presents a fuzzy-logic-based robust feedback anti-sway control system that is applicable either with or without a sensor for the sway angle of the payload. The discrete-time control approach is based on fuzzy interpolation of the controllers and of the crane dynamic model's parameters with respect to the varying rope length and payload mass. An iterative procedure combining a pole placement method and interval analysis of the closed-loop characteristic polynomial coefficients is proposed to design the robust control scheme. The sensorless anti-sway control application, developed using a PAC system with an RX3i controller, was verified on a laboratory-scale overhead crane.
Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity
NASA Astrophysics Data System (ADS)
Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.
2018-07-01
The paper presents an analysis of the experimental parameters involved in the application of the dual-probe heat-pulse technique, followed by a critical review of methods for processing the thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the consequent attainable uncertainty. Glycerol was selected as the test liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation of the maximum-detection approach. It was concluded that the maximum-detection approach to processing thermal response data gives results closest to the reference data, since the nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, interpolating the temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; by correcting for it, this systematic error can be reduced to a negligible value of about 0.8%.
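A sketch of the maximum-detection route is given below: a synthetic pulsed line-source response is sampled, a parabola through the samples around the peak interpolates the time of maximum t_m, and diffusivity follows from the standard dual-probe relation alpha = (r^2/4) [1/(t_m - t_0) - 1/t_m] / ln[t_m/(t_m - t_0)] for a pulse of duration t_0. Probe spacing, pulse length and diffusivity are illustrative values, not the paper's measurements.

```python
import numpy as np
from scipy.special import exp1

r, t0, alpha_true = 6e-3, 8.0, 9.5e-8   # probe spacing (m), pulse (s), m^2/s

def dT(t):
    """Pulsed infinite-line-source response (amplitude factor set to 1)."""
    t = np.asarray(t, dtype=float)
    on = exp1(r**2 / (4 * alpha_true * np.maximum(t, 1e-12)))
    off = np.where(t > t0,
                   exp1(r**2 / (4 * alpha_true * np.maximum(t - t0, 1e-12))),
                   0.0)
    return on - off

t = np.arange(1.0, 400.0, 1.0)              # 1 Hz sampling of the response
y = dT(t)
k = int(np.argmax(y))
c = np.polyfit(t[k-2:k+3], y[k-2:k+3], 2)   # parabola through 5 peak samples
tm = -c[1] / (2 * c[0])                     # interpolated time of maximum

alpha = (r**2 / 4) * (1/(tm - t0) - 1/tm) / np.log(tm / (tm - t0))
print(f"tm = {tm:.1f} s, alpha = {alpha:.3e} (true {alpha_true:.3e})")
```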
Coupling between shear and bending in the analysis of beam problems: Planar case
NASA Astrophysics Data System (ADS)
Shabana, Ahmed A.; Patel, Mohil
2018-04-01
The interpretation of invariants, such as curvatures which uniquely define the bending and twist of space curves and surfaces, is fundamental in the formulation of the beam and plate elastic forces. Accurate representations of curve and surface invariants, which enter into the definition of the strain energy equations, is particularly important in the case of large displacement analysis. This paper discusses this important subject in view of the fact that shear and bending are independent modes of deformation and do not have kinematic coupling; this is despite the fact that kinetic coupling may exist. The paper shows, using simple examples, that shear without bending and bending without shear at an arbitrary point and along a certain direction are scenarios that higher-order finite elements (FE) can represent with a degree of accuracy that depends on the order of interpolation and/or mesh size. The FE representation of these two kinematically uncoupled modes of deformation is evaluated in order to examine the effect of the order of the polynomial interpolation on the accuracy of representing these two independent modes. It is also shown in this paper that not all the curvature vectors contribute to bending deformation. In view of the conclusions drawn from the analysis of simple beam problems, the material curvature used in several previous investigations is evaluated both analytically and numerically. The problems associated with the material curvature matrix, obtained using the rotation of the beam cross-section, and the fundamental differences between this material curvature matrix and the Serret-Frenet curvature matrix are discussed.
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' (that the Courant number, c = u Δt/Δx, must be less than or equal to one) for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
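The argument is easy to demonstrate for first-order upwind: writing the Courant number as c = N + Δc, the integer part is handled as an exact whole-cell shift and the scheme is applied with the fractional remainder. A minimal sketch follows (periodic domain; the grid, Courant number and initial profile are illustrative assumptions).

```python
import numpy as np

def upwind_large_dt(u, c):
    """First-order upwind advection (periodic) for arbitrary Courant number:
    shift by the integer part N of c, then upwind with the fraction dc < 1."""
    N = int(np.floor(c))
    dc = c - N
    u = np.roll(u, N)                       # exact whole-cell translation
    return u - dc * (u - np.roll(u, 1))     # standard upwind, now with dc < 1

# Advect a Gaussian with c = 2.6 (far beyond the naive CFL limit).
x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)
for _ in range(40):
    u = upwind_large_dt(u, 2.6)
print("mass conserved:", np.isclose(u.sum(), np.exp(-200 * (x - 0.3) ** 2).sum()))
```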
A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Capolino, F; Wilton, D
2005-02-02
A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
General invertible transformation and physical degrees of freedom
NASA Astrophysics Data System (ADS)
Takahashi, Kazufumi; Motohashi, Hayato; Suyama, Teruaki; Kobayashi, Tsutomu
2017-04-01
An invertible field transformation is such that the old field variables correspond one-to-one to the new variables. As such, one may think that two systems that are related by an invertible transformation are physically equivalent. However, if the transformation depends on field derivatives, the equivalence between the two systems is nontrivial due to the appearance of higher derivative terms in the equations of motion. To address this problem, we prove the following theorem on the relation between an invertible transformation and Euler-Lagrange equations: If the field transformation is invertible, then any solution of the original set of Euler-Lagrange equations is mapped to a solution of the new set of Euler-Lagrange equations, and vice versa. We also present applications of the theorem to scalar-tensor theories.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity forces from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high-efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant are: (1) an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small, allowing the algorithm to become trapped in local minima, and (3) gradient information is not always accessible nor always trustworthy for a given problem. To avoid these problems, this dissertation is focused on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon L1 Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. This PSO method enables the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches. In summary, the results of this dissertation find that the PSO method does, indeed, successfully optimize the low-thrust trajectory transfer problem without the need for initial guessing. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation, by at least an order of magnitude in terms of CPU time. Finally, the PSO method is also used to solve a traditional, two-burn, impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
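For readers unfamiliar with the heuristic, the core PSO update used in work like this can be sketched in a few lines of Python; the quadratic cost function, inertia weight w, and cognitive/social weights c1 and c2 below are illustrative assumptions, not values from the dissertation.

```python
# Minimal PSO sketch: velocity/position updates toward personal and global bests.
import numpy as np

def pso(cost, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # swarm pull
        x = np.clip(x + v, lo, hi)                                 # stay in bounds
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Stand-in for a transfer-cost functional: a shifted 2-D quadratic.
best_x, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)),
                     np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```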
NASA Astrophysics Data System (ADS)
Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.
2014-06-01
The non-hydrostatic (NH) compressible Euler equations of a dry atmosphere are solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally-spectral-element, vertically-finite-difference model is accurate and robust. Using the 2-D slice model, we effectively show that the combined spatial discretization - spectral elements in the horizontal and finite differences in the vertical - offers a viable method for the development of a NH dynamical core.
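As a minimal illustration of the SEM ingredient named above, the sketch below builds Gauss-Lobatto-Legendre (GLL) nodes and the Lagrange cardinal functions on them; the element order is an arbitrary choice for demonstration, not that of the model.

```python
# GLL nodes and Lagrange cardinal polynomials (nodal SEM basis), via NumPy.
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(p):
    """Order-p GLL nodes: the endpoints plus the roots of P_p'(x)."""
    c = np.zeros(p + 1)
    c[-1] = 1.0                                   # P_p in the Legendre basis
    interior = legendre.Legendre(c).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def lagrange_cardinals(nodes, x):
    """Evaluate every Lagrange cardinal polynomial l_j on the points x."""
    x = np.atleast_1d(x)
    out = np.ones((len(nodes), len(x)))
    for j, xj in enumerate(nodes):
        for k, xk in enumerate(nodes):
            if k != j:
                out[j] *= (x - xk) / (xj - xk)
    return out

nodes = gll_nodes(4)
# Cardinal property l_j(x_k) = delta_jk: evaluation at the nodes is the identity.
assert np.allclose(lagrange_cardinals(nodes, nodes), np.eye(len(nodes)))
```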
Solutions of some problems in applied mathematics using MACSYMA
NASA Technical Reports Server (NTRS)
Punjabi, Alkesh; Lam, Maria
1987-01-01
Various Symbolic Manipulation Programs (SMP) were tested to check the functioning of their commands and their suitability under various operating systems. Support systems for SMP were found to be relatively better than the one for MACSYMA. The graphics facilities for MACSYMA do not work as expected under the UNIX operating system, and not all MACSYMA commands function as described in the manuals. Shape representation is a central issue in computer graphics and computer-aided design. Aside from appearance, there are other application-dependent desirable properties, such as continuity to a certain order, symmetry, axis-independence, and variation-diminishing properties. Several shape representations are studied, including the osculatory method, a piecewise cubic polynomial method using two different slope estimates, the piecewise cubic Hermite form, a method by Harry McLaughlin, and a piecewise Bezier method. They are applied to collected physical and chemical data, and the relative merits and demerits of these methods are examined. The kinematics of a single-link, non-dissipative robot arm is studied using MACSYMA. The Lagrangian is set up and Lagrange's equations are derived; from there, Hamiltonian equations of motion are obtained. The equations suggest that bifurcation of solutions can occur, depending upon the value of a single parameter. Using the characteristic function W, the Hamilton-Jacobi equation is derived. It is shown that the H-J equation can be solved in closed form, and analytical solutions to the H-J equation are obtained.
Space Instrument Optimization by Implementing of Generic Three Bodies Circular Restricted Problem
NASA Astrophysics Data System (ADS)
Nejat, Cyrus
2011-01-01
In this study, the main discussion focuses on spacecraft operation, with a concentration on stationary points in space. To achieve these objectives, the circular restricted problem was solved for selected approaches. The equations of motion of the three-body restricted problem were shown to apply in cases beyond Lagrange's (1736-1813 A.D.) results, by means of the proposed CN (Cyrus Nejat) theorem, along with appropriate comments. In addition to the five Lagrange points, two other points, CN1 and CN2, were found to be unstable equilibrium points at a very large distance with respect to the Lagrange points, but stable at infinity. A simulation of the Milky Way Galaxy and the Andromeda Galaxy was created to find the Lagrange points, CN points (Cyrus Nejat points), and CN lines (Cyrus Nejat lines). The equations of motion were rearranged, by means of a decoupling concept, in such a way that the transfer trajectory would be conical. The main objective was to make a halo orbit transfer about the CN lines. The author therefore proposes that all of the corresponding sizing designs be developed by optimization techniques in future approaches, optimization techniques being suitable procedures for searching for the most ideal response of a system.
Integrated thermal disturbance analysis of optical system of astronomical telescope
NASA Astrophysics Data System (ADS)
Yang, Dehua; Jiang, Zibo; Li, Xinnan
2008-07-01
During operation, an astronomical telescope undergoes thermal disturbance - more seriously so in a solar telescope - which may degrade image quality. This motivates careful investigation of thermal loads, and of the measures applied to assess their effect on final image quality, during the design phase. Integrated modeling analysis boosts the process of finding a comprehensive optimum design scheme by software simulation. In this paper, we focus on the Finite Element Analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefed from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power-series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from shape functions in finite element theory to interface the thermal model with the structural model, and further to apply the temperatures onto structural model nodes; thereby, the thermal loads are transferred with as high fidelity as possible. Data interface and communication between the two software packages are discussed mainly for mirror surfaces, and hence for optical figure representation and transformation. We compare and comment on two different methods, Zernike polynomials and power series expansion, for representing deformed optical surfaces and transferring them to ZEMAX. Additionally, the application of these methods to surfaces with non-circular apertures is discussed. At the end, an optical telescope with a parabolic primary mirror 900 mm in diameter is analyzed to illustrate the above discussion. A finite element model of the parts of the telescope of most interest is generated in ANSYS with necessary structural simplification and equivalence. Thermal analysis is performed, the resulting positions and figures of the optics are retrieved and transferred to ZEMAX, and the final image quality is then evaluated under thermal disturbance.
Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing
2014-10-01
Four orthogonal polynomial bases for reconstructing a wavefront over a square aperture with the modal method are currently available: the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials, and Numerical polynomials. They are all orthogonal over the full unit square domain. The 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are the 2D Legendre polynomials. The Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. The Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal bases by theoretical analysis and numerical experiments, from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
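The product construction of the first two bases is easy to make concrete; below is a minimal least-squares wavefront fit with a 2D Chebyshev product basis, where the synthetic wavefront and the degree are assumptions for illustration.

```python
# Modal wavefront fit over a square aperture with T_i(x) * T_j(y) terms.
import numpy as np
from numpy.polynomial.chebyshev import chebval

def cheb2d_design(x, y, deg):
    """Design matrix whose columns are T_i(x) * T_j(y), 0 <= i, j <= deg."""
    cols = []
    for i in range(deg + 1):
        ci = np.zeros(i + 1); ci[-1] = 1.0        # coefficients selecting T_i
        for j in range(deg + 1):
            cj = np.zeros(j + 1); cj[-1] = 1.0    # coefficients selecting T_j
            cols.append(chebval(x, ci) * chebval(y, cj))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
w = 0.3 * x**2 - 0.1 * x * y + 0.05 * y**3        # synthetic wavefront samples
A = cheb2d_design(x, y, deg=3)
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)    # modal coefficients
```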
PID position regulation in one-degree-of-freedom Euler-Lagrange systems actuated by a PMSM
NASA Astrophysics Data System (ADS)
Verastegui-Galván, J.; Hernández-Guzmán, V. M.; Orrante-Sakanassi, J.
2018-02-01
This paper is concerned with position regulation in one-degree-of-freedom Euler-Lagrange systems. We consider that the mechanical subsystem is actuated by a permanent magnet synchronous motor (PMSM). Our proposal consists of a Proportional-Integral-Derivative (PID) controller for the mechanical subsystem and a slight variation of field-oriented control for the PMSM. We take the motor electric dynamics into account in the stability analysis. We present, for the first time, a global asymptotic stability proof for such a control scheme without requiring the mechanical subsystem to naturally possess viscous friction. Finally, as a corollary of our main result, we prove global asymptotic stability for output feedback PID regulation of one-degree-of-freedom Euler-Lagrange systems when the generated torque is considered as the system input, i.e. when the electric dynamics of the PMSM is not taken into account.
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved throughout the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
NASA Astrophysics Data System (ADS)
Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu
2018-05-01
A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined-blowing steelmaking converter. The Euler-based volume-of-fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation, and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented first, but their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical-kernel interpolation methods are proposed; compared with symmetrical-kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation, and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient, and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic-kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration, the symmetrical interpolations provide a certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.
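For concreteness, here is a 1-D sketch of the symmetric cubic B-spline kernel singled out above. One hedge: convolving samples directly with this kernel yields a smoothing approximation; true B-spline interpolation additionally requires a coefficient prefilter, which is omitted here for brevity.

```python
# Symmetric cubic B-spline kernel and kernel-weighted resampling (1-D sketch).
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel B3(t); support is |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    near, far = t < 1, (t >= 1) & (t < 2)
    out[near] = (4 - 6 * t[near] ** 2 + 3 * t[near] ** 3) / 6
    out[far] = (2 - t[far]) ** 3 / 6
    return out

def resample(samples, positions):
    """Kernel-weighted resampling of uniformly spaced samples (no prefilter)."""
    idx = np.arange(len(samples))
    return np.array([np.sum(samples * cubic_bspline(p - idx)) for p in positions])

vals = resample(np.array([0.0, 1.0, 0.0, -1.0, 0.0]), np.linspace(0, 4, 9))
```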
Areal and Temporal Analysis of Precipitation Patterns In Slovakia Using Spectral Analysis
NASA Astrophysics Data System (ADS)
Pishvaei, M. R.
Harmonic analysis, as an objective method of studying precipitation seasonality, is applied to the 1901-2000 monthly precipitation averages at five stations in the lowland part of Slovakia with elevation less than 800 m a.s.l. The significant harmonics of the long-term precipitation series have been computed separately for eight 30-year periods covering the 20th century, and some properties and variations are compared to the 100-year monthly precipitation averages. The selected results show that the first and second harmonics predominantly influence the annual distribution and climatic seasonal regimes of precipitation, contributing to the precipitation amplitude/pattern about 20% and 10%, respectively; these indicate annual and half-year variations. The remaining harmonics each contribute less than 5% to the Fourier interpolation course. The maximum in the yearly precipitation course, which occurs approximately at the beginning of July, shifts to the middle of June because of phase changes. Some probable reasons in terms of the Fourier components are discussed. In addition, a temporal analysis of the precipitation time series of the Hurbanovo Observatory, the longest observational series on the territory of Slovakia (with 130 years of precipitation records), has been performed individually, and possible meteorological factors responsible for the observed patterns are suggested. A comparison of the annual precipitation course obtained from the analysis of daily precipitation totals, and of polynomial trends with the Fourier interpolation, has been carried out as well. Daily precipitation data in the latest period are compared for some stations in Slovakia. Only selected results are presented in the poster.
Analysis of warping deformation modes using higher order ANCF beam element
NASA Astrophysics Data System (ADS)
Orzechowski, Grzegorz; Shabana, Ahmed A.
2016-02-01
Most classical beam theories assume that the beam cross section remains a rigid surface under an arbitrary loading condition. However, in the absolute nodal coordinate formulation (ANCF) continuum-based beams, this assumption can be relaxed allowing for capturing deformation modes that couple the cross-section deformation and beam bending, torsion, and/or elongation. The deformation modes captured by ANCF finite elements depend on the interpolating polynomials used. The most widely used spatial ANCF beam element employs linear approximation in the transverse direction, thereby restricting the cross section deformation and leading to locking problems. The objective of this investigation is to examine the behavior of a higher order ANCF beam element that includes quadratic interpolation in the transverse directions. This higher order element allows capturing warping and non-uniform stretching distribution. Furthermore, this higher order element allows for increasing the degree of continuity at the element interface. It is shown in this paper that the higher order ANCF beam element can be used effectively to capture warping and eliminate Poisson locking that characterizes lower order ANCF finite elements. It is also shown that increasing the degree of continuity requires a special attention in order to have acceptable results. Because higher order elements can be more computationally expensive than the lower order elements, the use of reduced integration for evaluating the stress forces and the use of explicit and implicit numerical integrations to solve the nonlinear dynamic equations of motion are investigated in this paper. It is shown that the use of some of these integration methods can be very effective in reducing the CPU time without adversely affecting the solution accuracy.
Poly-Frobenius-Euler polynomials
NASA Astrophysics Data System (ADS)
Kurt, Burak
2017-07-01
Hamahata [3] defined poly-Euler polynomials and generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.
Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos
2001-09-11
Only garbled front-matter fragments survive, referencing "Some basic hypergeometric polynomials that generalize Jacobi polynomials" (Memoirs Amer. Math. Soc.) and Figure 1, "The Askey scheme of orthogonal polynomials", a tree structure (following [24]) that classifies the hypergeometric orthogonal polynomials associated with the generalized polynomial chaos.
Enhancements to the SHARP Build System and NEK5000 Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Bennett, Andrew R.; Billings, Jay Jay
The SHARP project for the Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program provides a multiphysics framework for coupled simulations of advanced nuclear reactor designs. It provides an overall coupling environment that utilizes custom interfaces to couple existing physics codes through a common spatial decomposition and a unique solution transfer component. As of this writing, SHARP couples neutronics, thermal hydraulics, and structural mechanics using PROTEUS, Nek5000, and Diablo, respectively. This report details two primary SHARP improvements regarding the Nek5000 and Diablo individual physics codes: (1) an improved Nek5000 coupling interface that lets SHARP achieve a vast increase in overall solution accuracy by manipulating the structure of the internal Nek5000 spatial mesh, and (2) the capability to seamlessly couple structural mechanics calculations into the framework through improvements to the SHARP build system. The Nek5000 coupling interface now uses a barycentric Lagrange interpolation method that takes the vertex-based power and density computed from the PROTEUS neutronics solver and maps it to the user-specified, general-order Nek5000 spectral element mesh. Before this work, SHARP handled this vertex-based solution transfer in an averaging-based manner. SHARP users can now achieve higher levels of accuracy by specifying any arbitrary Nek5000 spectral mesh order. This improvement takes the average percentage error between the PROTEUS power solution and the Nek5000 interpolated result down drastically from over 23 % to just above 2 %, and maintains the correct power profile. We have integrated Diablo into the SHARP build system to facilitate the future coupling of structural mechanics calculations into SHARP. Previously, simulations involving Diablo were done in an iterative manner, requiring a large amount of manual work, and were left only as a task for advanced users. This report details a new Diablo build system that was implemented using GNU Autotools, mirroring much of the current SHARP build system, and easing the use of structural mechanics calculations for end-users of the SHARP multiphysics framework. It lets users easily build and use Diablo as a stand-alone simulation, as well as fully couple it with the other SHARP physics modules. The top-level SHARP build system was modified to allow Diablo to hook in directly. New dependency handlers were implemented to let SHARP users easily build the framework with these new simulation capabilities. The remainder of this report describes this work in full, with a detailed discussion of the overall design philosophy of SHARP, the new solution interpolation method introduced, and the Diablo integration work. We conclude with a discussion of possible future SHARP improvements that will serve to increase solution accuracy and framework capability.
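A minimal sketch of barycentric Lagrange interpolation, the solution-transfer technique named above, is shown below using SciPy's implementation; the 1-D grids and nodal power values are hypothetical stand-ins, not PROTEUS or Nek5000 data.

```python
# Barycentric Lagrange interpolation of nodal data onto a finer grid.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

coarse_x = np.linspace(0.0, 1.0, 5)               # coarse vertex coordinates
power = np.array([0.0, 0.8, 1.0, 0.7, 0.1])       # assumed nodal power values
fine_x = np.linspace(0.0, 1.0, 33)                # finer spectral-style grid

transfer = BarycentricInterpolator(coarse_x, power)
fine_power = transfer(fine_x)                     # interpolated transfer result
```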
NASA Astrophysics Data System (ADS)
Rahimi Dalkhani, Amin; Javaherian, Abdolrahim; Mahdavi Basir, Hadi
2018-04-01
Wave propagation modeling, a vital tool in seismology, can be performed via several different numerical methods, among them the finite-difference, finite-element, and spectral-element methods (FDM, FEM, and SEM). Some advanced applications in seismic exploration benefit from frequency-domain modeling. Given the flexibility needed for complex geological models and for dealing with the free-surface boundary condition, we studied the frequency-domain acoustic wave equation using FEM and SEM. The results demonstrated that the frequency-domain FEM and SEM have good accuracy and numerical efficiency with second-order interpolation polynomials. Furthermore, we developed the second-order Clayton and Engquist absorbing boundary condition (CE-ABC2) and compared it with the perfectly matched layer (PML) for the frequency-domain FEM and SEM. Unlike the PML method, CE-ABC2 does not add any computational cost to the modeling beyond assembling the boundary matrices. As a result, CE-ABC2 is more efficient than PML for frequency-domain acoustic wave propagation modeling, especially when the computational cost is high and high-level absorbing performance is unnecessary.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1981-01-01
Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of nonlinear deep arch theory that includes the effects of transverse shear deformation. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using the Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
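A minimal Python analogue of the MULTIVAR idea - fitting a nonlinear model by minimizing the sum of squared residuals with a BFGS engine - is sketched below; the exponential model and synthetic data are assumptions for illustration, not MULTIVAR's built-in models.

```python
# Least-squares fit of y = a * exp(-b t) with SciPy's BFGS minimizer.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 50)
rng = np.random.default_rng(2)
y = 3.0 * np.exp(-0.4 * t) + 0.02 * rng.normal(size=t.size)   # noisy data

def sse(params):
    a, b = params
    return np.sum((y - a * np.exp(-b * t)) ** 2)  # sum of squared residuals

fit = minimize(sse, x0=[1.0, 1.0], method="BFGS")
a_hat, b_hat = fit.x
```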
An Immersed Boundary-Lattice Boltzmann Method for Simulating Particulate Flows
NASA Astrophysics Data System (ADS)
Zhang, Baili; Cheng, Ming; Lou, Jing
2013-11-01
A two-dimensional momentum exchange-based immersed boundary-lattice Boltzmann method developed by X.D. Niu et al. (2006) has been extended to three dimensions for solving fluid-particle interaction problems. This method combines the most desirable features of the lattice Boltzmann method and the immersed boundary method by using a regular Eulerian mesh for the flow domain and a Lagrangian mesh for the moving particles in the flow field. The no-slip boundary conditions for the fluid and the particles are enforced by adding a force density term into the lattice Boltzmann equation, and the forcing term is simply calculated by the momentum exchange of the boundary particle density distribution functions, which are interpolated by Lagrangian polynomials from the underlying Eulerian mesh. This method preserves the advantages of the lattice Boltzmann method in tracking a group of particles and, at the same time, provides an alternative approach to treating solid-fluid boundary conditions. Numerical validations show that the present method is very accurate and efficient. The method will be further developed to simulate more complex problems with particle deformation, particle-bubble, and particle-droplet interactions.
Symmetries of the Space of Linear Symplectic Connections
NASA Astrophysics Data System (ADS)
Fox, Daniel J. F.
2017-01-01
There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of an appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too-frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
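To illustrate why exponential fitting suits kinetics, consider a one-equation sketch: for a linearized species equation, the exponentially fitted step is exact, so stability and accuracy survive steps far larger than the relaxation time. The parameters below are assumed for illustration.

```python
# Exponentially fitted step for dy/dt = -(y - y_eq) / tau (solved exactly).
import numpy as np

def exp_fitted_step(y, y_eq, tau, dt):
    """Advance the linear relaxation equation by dt using the exact exponential."""
    return y_eq + (y - y_eq) * np.exp(-dt / tau)

# Steps with dt >> tau remain stable and land on the exact decay curve,
# whereas a classical explicit polynomial-based step of this size would blow up.
y = 1.0
for _ in range(5):
    y = exp_fitted_step(y, y_eq=0.2, tau=1e-3, dt=1.0)
```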
Multivariate geostatistical application for climate characterization of Minas Gerais State, Brazil
NASA Astrophysics Data System (ADS)
de Carvalho, Luiz G.; de Carvalho Alves, Marcelo; de Oliveira, Marcelo S.; Vianello, Rubens L.; Sediyama, Gilberto C.; de Carvalho, Luis M. T.
2010-11-01
The objective of the present study was to assess, for Minas Gerais, the cokriging methodology for characterizing the spatial variability of the Thornthwaite annual moisture index, annual rainfall, and average annual air temperature, based on altitude, latitude, and longitude. The climatic element data came from 39 INMET climatic stations located in the state of Minas Gerais and nearby areas, while the covariables altitude, latitude, and longitude came from the SRTM digital elevation model. Spatial dependence of the data was modeled through spherical cross-semivariograms and cross-covariance models. Box-Cox and log transformations were applied to the positive variables; in these situations, kriged predictions were back-transformed to the same scale as the original data. Trend was removed using global polynomial interpolation. Universal simple cokriging characterized the climate variables without bias and with higher accuracy and precision than simple cokriging. Considering the satisfactory implementation of universal simple cokriging for the monitoring of climatic elements, this methodology presents enormous potential for characterizing climate change impact in Minas Gerais state.
Estimation of option-implied risk-neutral into real-world density by using calibration function
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-04-01
Option prices contain crucial information that reflects the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility-function technique with a fourth-order polynomial interpolation is applied to obtain the RNDs; a calibration function is then used to convert the RNDs into RWDs. There are two types of calibration function: parametric and non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs provide more accurate information about the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration have better accuracy than the other densities.
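A minimal sketch of the volatility-function step described above: fit a fourth-order polynomial to an implied-volatility smile, rebuild call prices with Black-Scholes, and take the discrete second strike derivative (the Breeden-Litzenberger relation) to obtain an RND. All market inputs below are illustrative assumptions, not DJIA data.

```python
# RND extraction: quartic vol fit + Breeden-Litzenberger second derivative.
import numpy as np
from scipy.stats import norm

S, r, T = 100.0, 0.01, 1.0 / 12.0                 # spot, rate, one-month expiry
K = np.linspace(80, 120, 9)                       # quoted strikes (assumed)
iv = np.array([.28, .25, .23, .21, .20, .21, .22, .24, .27])  # assumed smile

smile = np.polynomial.Polynomial.fit(K, iv, deg=4)  # fourth-order vol function

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

Kf = np.linspace(81, 119, 400)                    # dense strike grid
C = bs_call(S, Kf, T, r, smile(Kf))               # smoothed call prices
rnd = np.exp(r * T) * np.gradient(np.gradient(C, Kf), Kf)  # q(K) = e^{rT} d2C/dK2
```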
Error detection and data smoothing based on local procedures
NASA Technical Reports Server (NTRS)
Guerra, V. M.
1974-01-01
An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle the small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial interpolating the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.
Müller, Dirk K; Pampel, André; Möller, Harald E
2013-05-01
Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
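A minimal sketch of the matrix-algebra idea for a reduced two-pool system (longitudinal magnetization and exchange only), propagated with a matrix exponential; the pool parameters below are assumed, and the paper's full Bloch-McConnell treatment uses larger matrices that include transverse components.

```python
# Two-pool longitudinal Bloch-McConnell step: dM/dt = A M + b, solved via expm.
import numpy as np
from scipy.linalg import expm

R1a, R1b = 1.0, 1.0          # 1/s, longitudinal relaxation rates (assumed)
kab, kba = 2.0, 40.0         # 1/s, exchange rate constants (assumed)
M0a, M0b = 1.0, 0.05         # equilibrium magnetizations (assumed)

A = np.array([[-(R1a + kab), kba],
              [kab, -(R1b + kba)]])
b = np.array([R1a * M0a, R1b * M0b])

# Augmenting the state with a constant 1 makes the affine system homogeneous,
# so M(t) is read off from a single matrix exponential.
Aug = np.zeros((3, 3))
Aug[:2, :2], Aug[:2, 2] = A, b

def propagate(M, dt):
    return (expm(Aug * dt) @ np.append(M, 1.0))[:2]

M = propagate(np.array([M0a, M0b]), dt=0.5)
```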
Hernandez, Andrew M; Boone, John M
2014-04-01
Monte Carlo methods were used to generate lightly filtered high-resolution x-ray spectra spanning from 20 kV to 640 kV. X-ray spectra were simulated for a conventional tungsten anode. The Monte Carlo N-Particle eXtended radiation transport code (MCNPX 2.6.0) was used to produce 35 spectra over the tube potential range from 20 kV to 640 kV, and cubic spline interpolation procedures were used to create piecewise polynomials characterizing the photon fluence per energy bin as a function of x-ray tube potential. Using these basis spectra and the cubic spline interpolation, 621 spectra were generated at 1 kV intervals from 20 to 640 kV. The tungsten anode spectral model using interpolating cubic splines (TASMICS) produces minimally filtered (0.8 mm Be) x-ray spectra with 1 keV energy resolution. The TASMICS spectra were compared mathematically with other, previously reported spectra. Using paired t-test analyses, no statistically significant difference (i.e., p > 0.05) was observed between compared spectra over energy bins above 1% of peak bremsstrahlung fluence. For all energy bins, the coefficient of determination (R^2) demonstrated good correlation for all spectral comparisons. The mean overall difference (MOD) and mean absolute difference (MAD) were computed over energy bins (above 1% of peak bremsstrahlung fluence) and over all the kV permutations compared. MOD and MAD comparisons with previously reported spectra were 2.7% and 9.7%, respectively (TASMIP), 0.1% and 12.0%, respectively [R. Birch and M. Marshall, "Computation of bremsstrahlung x-ray spectra and comparison with spectra measured with a Ge(Li) detector," Phys. Med. Biol. 24, 505-517 (1979)], 0.4% and 8.1%, respectively (Poludniowski), and 0.4% and 8.1%, respectively (AAPM TG 195). The effective energy of TASMICS spectra with 2.5 mm of added Al filtration ranged from 17 keV (at 20 kV) to 138 keV (at 640 kV); with 0.2 mm of added Cu filtration the effective energy was 9 keV at 20 kV and 169 keV at 640 kV. Ranging from 20 kV to 640 kV, 621 x-ray spectra were produced and are available at 1 kV tube potential intervals. The spectra are tabulated at 1 keV intervals. TASMICS spectra were shown to be largely equivalent to published spectral models and are available in spreadsheet format for interested users by emailing the corresponding author (JMB). © 2014 American Association of Physicists in Medicine.
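A minimal sketch of the interpolation scheme described above - a cubic spline across tube potential for each energy bin, evaluated at 1 kV steps; the basis-spectrum values here are synthetic stand-ins rather than TASMICS data, and the basis grid spacing is assumed.

```python
# Per-energy-bin cubic splines across tube potential, evaluated at 1 kV steps.
import numpy as np
from scipy.interpolate import CubicSpline

kv_basis = np.arange(20, 641, 20)                 # illustrative basis potentials
n_bins = 640                                      # 1 keV energy bins
rng = np.random.default_rng(3)
fluence = rng.random((kv_basis.size, n_bins))     # stand-in fluence per bin

spline = CubicSpline(kv_basis, fluence, axis=0)   # piecewise cubic in kV, per bin
kv_all = np.arange(20, 641)                       # 621 tube potentials, 1 kV apart
spectra = spline(kv_all)                          # shape (621, n_bins)
```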
1982-04-01
The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples (RAND report; only garbled reference fragments survive, citing Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278; "Conflict Among Criteria for Testing Hypothesis: Extension and Comments" (1979), Econometrica, 47, 203-207; and Breusch, T. S. and Pagan, A. R. (1980)).
Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors
2001-03-01
Only garbled front-matter fragments survive: a list of tables (Table 1, heats-of-vaporization parameters for two-liner phase transformation - complete liner sublimation and/or combined liner), an acronym list (1-D: one-dimensional; 2-D: two-dimensional; ALE3D: Arbitrary-Lagrange-Eulerian (3-D) computer code; ALEGRA: 3-D Arbitrary-Lagrange-Eulerian computer code), and an abstract fragment on case-liner bond areas and the grain inner bore, exploring the pre-ignition and ignition phases as well as burning evolution in rocket motor fast cookoff.
An Exposition on the Nonlinear Kinematics of Shells, Including Transverse Shearing Deformations
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2013-01-01
An in-depth exposition on the nonlinear deformations of shells with "small" initial geometric imperfections is presented without the use of tensors. First, the mathematical descriptions of an undeformed-shell reference surface and its deformed image are given in general nonorthogonal coordinates. The two-dimensional Green-Lagrange strains of the reference surface are derived and simplified for the case of "small" strains. Linearized reference-surface strains, rotations, curvatures, and torsions are then derived and used to obtain the "small" Green-Lagrange strains in terms of linear deformation measures. Next, the geometry of the deformed shell is described mathematically and the "small" three-dimensional Green-Lagrange strains are given. The deformations of the shell and its reference surface are related by introducing a kinematic hypothesis that includes transverse shearing deformations and contains the classical Love-Kirchhoff kinematic hypothesis as a proper, explicit subset. Lastly, summaries of the essential equations are given for general nonorthogonal and orthogonal coordinates, and the basis for further simplification of the equations is discussed.
Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment
NASA Astrophysics Data System (ADS)
Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc
2012-11-01
This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in a competitive environment. The proposed ALHN is a continuous Hopfield network whose energy function is based on an augmented Lagrange function, allowing it to deal efficiently with constrained optimization problems. The ALHN method overcomes drawbacks of the conventional Hopfield network such as local optima, long computational times, and the restriction to linear constraints. The proposed method is used to solve the ED problem with two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on two systems, of 3 units and 10 units, for the two considered revenue models. The results obtained with the proposed method are compared to those from the differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem; the proposed ALHN could therefore be a favorable tool for the ED problem in a competitive environment.
Holonomicity analysis of electromechanical systems
NASA Astrophysics Data System (ADS)
Wcislik, Miroslaw; Suchenia, Karol
2017-12-01
Electromechanical systems are described using state variables that contain electrical and mechanical components. The equations of motion, both electrical and mechanical, describe the relationships between these components and are obtained using Lagrange functions. On the basis of the Lagrange function and the Lagrange-d'Alembert equation, a methodology for obtaining the equations of electromechanical systems is presented, together with a discussion of the nonholonomicity of these systems. An electromechanical system in the form of a single-phase reluctance motor was used to verify the presented method. The mechanical subsystem was built so that it can oscillate as a physical pendulum, and the parameters of the electromechanical system were defined on the basis of the pendulum oscillation. The identification of the motor's electric parameters as a function of the rotation angle was carried out. In this paper, the characteristics and the parameters of the motor's equations of motion are presented, and the parameters of the equations of motion obtained from the experiment and from the second-order Lagrange equations are compared.
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems with sufficiently small scale. The existence of heat flux needs the sizes of heat conduction larger than their corresponding critical sizes, which are determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges for these non-linear heat conduction models with flux-limited behaviors. For sufficiently small scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models can also predict the theoretical possibility of violating the second law and multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction in the type of fast diffusion, which can also predict flux-limited behaviors.
A macroscopic plasma Lagrangian and its application to wave interactions and resonances
NASA Technical Reports Server (NTRS)
Peng, Y. K. M.
1974-01-01
The derivation of a macroscopic plasma Lagrangian is considered, along with its application to the description of nonlinear three-wave interaction in a homogeneous plasma and linear resonance oscillations in an inhomogeneous plasma. One approach to obtaining the Lagrangian is via the inverse problem of the calculus of variations for arbitrary first- and second-order quasilinear partial differential systems. Necessary and sufficient conditions for the given equations to be the Euler-Lagrange equations of a Lagrangian are obtained. These conditions are then used to determine the transformations that convert some classes of non-Euler-Lagrange equations to Euler-Lagrange form. The Lagrangians for a linear resistive transmission line and a linear warm collisional plasma are derived as examples. Using energy considerations, the correct macroscopic plasma Lagrangian is shown to differ from the velocity-integrated Low Lagrangian by a macroscopic potential energy that equals twice the particle thermal kinetic energy plus the energy lost by heat conduction.
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Pereira, Paulo; Šeput, Miranda
2016-04-01
Soil organic carbon (SOC), pH, available phosphorus (P), and potassium (K) are among the most important factors for soil fertility. These soil parameters are highly variable in space and time, with implications for crop production. The aim of this work is to study the spatial variability of SOC, pH, P, and K in an organic farm located in the river Rasa valley (Croatia). A regular grid (100 x 100 m) was designed and 182 samples were collected on silty clay loam soil. P, K, and SOC showed moderate heterogeneity, with coefficients of variation (CV) of 21.6%, 32.8%, and 51.9%, respectively. Soil pH recorded low spatial variability, with a CV of 1.5%. Soil pH, P, and SOC did not follow a normal distribution; only after a Box-Cox transformation did the data meet the normality requirements. Directional exponential models were the best fitted and were used to describe spatial autocorrelation. Soil pH, P, and SOC showed strong spatial dependence, with nugget-to-sill ratios of 13.78%, 0.00%, and 20.29%, respectively; only K recorded moderate spatial dependence. Semivariogram ranges indicate that the future sampling interval could be 150-200 m in order to reduce sampling costs. Fourteen different interpolation models for mapping soil properties were tested; for each variable, the model with the lowest root mean square error was taken as the most appropriate for mapping. The results showed that radial basis function models (spline with tension and completely regularized spline) were the best predictors for P and K, while thin plate spline and inverse distance weighting models were the least accurate. The best interpolator for pH and SOC was the local polynomial with power 1, while thin plate spline was the least accurate. According to the soil nutrient maps, the investigated area has a very rich K supply, while the P supply is insufficient over the largest part of the area. Soil pH maps showed a mostly neutral reaction, while individual patches of alkaline soil indicate the possibility of seawater intrusion and salt accumulation in the soil profile. Future research should focus on the spatial patterns of soil pH, electrical conductivity, and sodium adsorption ratio. Keywords: geostatistics, semivariogram, interpolation models, soil chemical properties
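A minimal sketch of the empirical semivariogram behind the nugget-to-sill figures quoted above; the coordinates and values below are synthetic stand-ins for the 182 grid samples.

```python
# Empirical semivariogram: mean of 0.5 * (z_i - z_j)^2 binned by pair distance.
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices_from(d, k=1)             # unique pairs only
    d, g = d[iu], g[iu]
    lag = np.digitize(d, bin_edges)
    return np.array([g[lag == b].mean() if np.any(lag == b) else np.nan
                     for b in range(1, len(bin_edges))])

rng = np.random.default_rng(4)
coords = rng.uniform(0, 1000, size=(182, 2))      # stand-in sample locations (m)
values = rng.normal(size=182)                     # stand-in soil property
gamma = empirical_semivariogram(coords, values, np.arange(0, 600, 50))
```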
Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos
2002-07-25
The approach employs orthogonal polynomial functionals from the Askey scheme (cf. Askey and Wilson, Some basic hypergeometric polynomials that generalize Jacobi polynomials, Memoirs Amer. Math. Soc., AMS) as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection is applied to solve (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations.
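For a Gaussian input, the chaos expansion uses probabilists' Hermite polynomials. The following minimal Python sketch projects a function of a standard normal variable onto that basis by Gauss quadrature; the function f and the truncation order are arbitrary choices for illustration, not taken from the report.

    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He   # probabilists' Hermite

    # Expand f(xi), xi ~ N(0,1), as f = sum_k c_k He_k(xi),
    # with c_k = E[f He_k] / k!  (since E[He_k^2] = k!).
    nodes, weights = He.hermegauss(40)
    weights = weights / np.sqrt(2 * np.pi)   # turn quadrature into expectation

    f = lambda x: np.exp(0.3 * x)            # illustrative uncertain quantity
    coeffs = np.array([np.sum(weights * f(nodes) *
                              He.hermeval(nodes, [0] * k + [1])) / factorial(k)
                       for k in range(6)])

    x = 1.7
    print(f(x), He.hermeval(x, coeffs))      # truncated chaos matches f closely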
Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.
Mahajan, Virendra N
2012-06-20
In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L_l(x)L_m(y), where l and m are positive integers (including zero) and L_l(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L_l(x)L_m(y), there is a corresponding orthonormal polynomial L_l(y)L_m(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
Spatiotemporal Interpolation Methods for Solar Event Trajectories
NASA Astrophysics Data System (ADS)
Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe
2018-05-01
This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
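Of the four strategies, MBR-Interpolation is the simplest to sketch: between two reported times, each coordinate of the minimum bounding rectangle is interpolated linearly. A minimal Python sketch follows; the function name and the (xmin, ymin, xmax, ymax) encoding are ours, not the paper's API.

    def interpolate_mbr(mbr0, mbr1, t0, t1, t):
        """Linearly interpolate a minimum bounding rectangle
        (xmin, ymin, xmax, ymax) between two reported times t0 < t1."""
        a = (t - t0) / (t1 - t0)
        return tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(mbr0, mbr1))

    # A solar event region drifting and growing between two observations:
    print(interpolate_mbr((10, 5, 20, 15), (14, 6, 26, 18), t0=0, t1=60, t=30))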
A bivariate rational interpolation with a bi-quadratic denominator
NASA Astrophysics Data System (ADS)
Duan, Qi; Zhang, Huanling; Liu, Aikui; Li, Huaigu
2006-10-01
In this paper a new rational interpolation with a bi-quadratic denominator is developed to create a space surface using only values of the function being interpolated. The interpolation function has a simple and explicit rational mathematical representation. When the knots are equally spaced, the interpolating function can be expressed in matrix form, and this form has a symmetry property. The concept of integral weight coefficients of the interpolation is given, which describes the "weight" of the interpolation points in the local interpolating region.
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory, and it has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H_{1/2} of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on an ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
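Under a square-root re-parameterization, ODFs lie on the unit sphere of a Hilbert space, where the geodesic between two points is a great-circle arc; a weighted Fréchet mean of two ODFs then reduces to spherical linear interpolation. A minimal sketch under that assumption, with an illustrative three-sample discretization:

    import numpy as np

    def slerp(p, q, t):
        """Geodesic (great-circle) interpolation between unit vectors p, q."""
        cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
        theta = np.arccos(cos_theta)
        if theta < 1e-8:
            return p
        return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

    # Two discretized square-root ODFs (non-negative, unit L2 norm):
    p = np.sqrt(np.array([0.5, 0.3, 0.2]))
    q = np.sqrt(np.array([0.2, 0.2, 0.6]))
    mid = slerp(p, q, 0.5)
    print(mid**2, np.linalg.norm(mid))   # interpolated ODF; norm stays 1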
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
1979-01-01
Among the contributions from the Bernoullis was Daniel Bernoulli's addition of the acceleration term to the beam equation; the theory was improved during 1811-1816 by Germain and Lagrange, and the correct derivation was finally produced in 1852 (G. Lamé, "Leçons sur la ..."). Vibrations of isotropic membranes and plates (low frequencies) were treated by Euler, Jacques Bernoulli, Germain, and Lagrange.
The Lagrange Points in a Binary Black Hole System: Applications to Electromagnetic Signatures
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy
2010-01-01
We study the stability and evolution of the Lagrange points L_4 and L_5 in a black hole (BH) binary system, including gravitational radiation. We find that gas and stars can be shepherded in with the BH system until the final moments before merger, providing the fuel for a bright electromagnetic counterpart to a gravitational wave signal. Other astrophysical signatures include the ejection of hyper-velocity stars, gravitational collapse of globular clusters, and the periodic shift of narrow emission lines in AGN.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including its Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
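As an illustration of the correction functional underlying such schemes, the following sympy sketch applies the classical VIM iteration u_{n+1}(t) = u_n(t) + ∫_0^t λ (u_n'(s) + u_n(s)) ds with the optimal multiplier λ = -1 to the test problem u' + u = 0, u(0) = 1 (our example, not one from the paper); the iterates reproduce the Taylor expansion of exp(-t).

    import sympy as sp

    t, s = sp.symbols('t s')
    lam = -1                  # optimal Lagrange multiplier for u' + u = 0
    u = sp.Integer(1)         # initial guess u0 = u(0) = 1
    for _ in range(5):
        residual = (sp.diff(u, t) + u).subs(t, s)
        u = sp.expand(u + sp.integrate(lam * residual, (s, 0, t)))

    print(u)                                        # partial sum of exp(-t)
    print(sp.series(sp.exp(-t), t, 0, 6).removeO()) # matching Taylor polynomial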
Euler-Lagrange formulas for pseudo-Kähler manifolds
NASA Astrophysics Data System (ADS)
Park, JeongHyeong
2016-01-01
Let c be a characteristic form of degree k which is defined on a Kähler manifold of real dimension m > 2k. Taking the inner product with the Kähler form Ω^k gives a scalar invariant which can be considered as a generalized Lovelock functional. The associated Euler-Lagrange equations are a generalized Einstein-Gauss-Bonnet gravity theory; this theory restricts to the canonical formalism if c = c_2 is the second Chern form. We extend previous work studying these equations from the Kähler to the pseudo-Kähler setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Equivalences of the multi-indexed orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odake, Satoru
2014-01-15
Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
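The decomposition idea itself is easy to sketch: resize rows with a 1-D interpolator, then columns. The Python sketch below uses plain linear interpolation in the slot where DMCGI would use its registration-based 1-D control grid interpolator; the function names are ours.

    import numpy as np

    def resample_1d(signal, new_len):
        """One 1-D interpolation pass (linear here; DMCGI would use a
        registration-based control-grid interpolator in this slot)."""
        old = np.linspace(0.0, 1.0, len(signal))
        new = np.linspace(0.0, 1.0, new_len)
        return np.interp(new, old, signal)

    def resize_2d(img, new_rows, new_cols):
        """Decompose 2-D resizing into independent row and column passes."""
        rows_done = np.stack([resample_1d(r, new_cols) for r in img])
        return np.stack([resample_1d(c, new_rows) for c in rows_done.T], axis=1)

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(resize_2d(img, 8, 8).shape)   # (8, 8)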
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw
2011-04-15
Research Highlights: Physical examples involving exceptional orthogonal polynomials. Exceptional polynomials as deformations of classical orthogonal polynomials. Exceptional polynomials from the Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials which start with constant terms, these new polynomials have lowest degree l = 1, 2, and ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then numerically assessed on triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
Coherent orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es
2013-08-15
We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on an open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=-1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. •2nd order Casimir originates a 2nd order differential equation that defines the corresponding OP family. •Generalized coherent polynomials are obtained from OP.
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method has been proposed, which solves pdf transport based on Euler fields and eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, carried out as part of the PhD project 'Design and analysis of a Passive Outflow Reducer relying on cavitation', a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can easily be extended to the stochastic field formulation.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. The proposed method is simple and accurate. One of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the computational difficulty is overcome, and there is no need to compute the generalized Lagrange basis and matrices, since the basis functions have the Kronecker delta property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective and reliable, and does not require any restrictive assumptions for the nonlinear terms.
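A minimal sketch of the Crank-Nicolson time discretization on a model diffusion (Fokker-Planck-like) equation; a second-order finite-difference Laplacian is substituted here for the paper's GLJGL collocation in space, and all parameters are illustrative.

    import numpy as np

    # Crank-Nicolson stepping for u_t = D u_xx on [0,1], u = 0 at the walls.
    N, D, dt, steps = 50, 1.0, 1e-3, 200
    x = np.linspace(0.0, 1.0, N + 2)
    dx = x[1] - x[0]
    u = np.exp(-100 * (x[1:-1] - 0.5) ** 2)       # interior initial condition

    A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) * (D / dx**2)
    I = np.eye(N)
    lhs = I - 0.5 * dt * A        # implicit half of the trapezoidal rule
    rhs = I + 0.5 * dt * A        # explicit half
    for _ in range(steps):
        u = np.linalg.solve(lhs, rhs @ u)
    print(u.max())                # peak decays diffusively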
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss-of-effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for a multiple robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
Simple Proof of Jury Test for Complex Polynomials
NASA Astrophysics Data System (ADS)
Choo, Younseok; Kim, Dongmin
Recently some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided, based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
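The Jury test itself is an algebraic table criterion; as a quick numerical cross-check of the property it certifies (all zeros strictly inside the unit circle), one can simply inspect root moduli. A sketch, with an arbitrary complex polynomial of our choosing:

    import numpy as np

    def is_schur_stable(coeffs):
        """Numerically check Schur stability (all zeros strictly inside the
        unit circle) of a complex polynomial given by its coefficients,
        highest degree first. A cross-check, not the Jury table itself."""
        roots = np.roots(coeffs)
        return bool(np.all(np.abs(roots) < 1.0))

    # z^2 - (0.5+0.2j) z + 0.3j : are both roots inside the unit disc?
    print(is_schur_stable([1.0, -(0.5 + 0.2j), 0.3j]))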
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
NASA Astrophysics Data System (ADS)
Doha, E. H.
2003-05-01
A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
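One classical connection formula of this kind expresses the derivative of a Laguerre polynomial as a linear combination of lower-degree Laguerre polynomials, L_n'(x) = -(L_0 + L_1 + ... + L_{n-1})(x). The identity is standard (stated here independently of the paper's notation) and can be checked numerically:

    import numpy as np
    from numpy.polynomial import laguerre as L

    # d/dx L_n(x) = -(L_0 + L_1 + ... + L_{n-1})(x)
    n = 7
    cn = np.zeros(n + 1)
    cn[n] = 1.0                   # Laguerre-series coefficients of L_n
    deriv = L.lagder(cn)          # derivative, again in the Laguerre basis
    print(deriv)                  # -> [-1, -1, ..., -1] (length n)
    print(np.allclose(deriv, -np.ones(n)))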
NASA Astrophysics Data System (ADS)
Chen, Zhixiang; Fu, Bin
This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P=NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^{(1-ε)/2} lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P≠NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.
Mafusire, Cosmas; Krüger, Tjaart P J
2018-06-01
The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_{i,j}(u,v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling the MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
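For Topic 1, here is a minimal Python sketch of 1-D Gaussian RBF interpolation with a spatially variable shape parameter; the per-node values of ε (chosen ad hoc, larger toward the edges to damp Runge-type oscillations on the Runge function) and the direct linear solve are illustrative, not the thesis's scheme.

    import numpy as np

    def rbf_interpolate(xc, fc, eps, xq):
        """1-D Gaussian RBF interpolant with a per-centre shape parameter
        eps[j], evaluated at query points xq."""
        A = np.exp(-(eps[None, :] * (xc[:, None] - xc[None, :])) ** 2)
        lam = np.linalg.solve(A, fc)
        B = np.exp(-(eps[None, :] * (xq[:, None] - xc[None, :])) ** 2)
        return B @ lam

    xc = np.linspace(-1, 1, 21)
    fc = 1.0 / (1.0 + 25 * xc**2)            # Runge function samples
    eps = 2.0 + 3.0 * np.abs(xc)             # larger eps toward the edges
    xq = np.linspace(-1, 1, 401)
    err = np.max(np.abs(rbf_interpolate(xc, fc, eps, xq) - 1/(1 + 25*xq**2)))
    print(err)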
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and the error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and the error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
Nodal Statistics for the Van Vleck Polynomials
NASA Astrophysics Data System (ADS)
Bourget, Alain
The Van Vleck polynomials naturally arise from the generalized Lamé equation
Modelling of charged satellite motion in Earth's gravitational and magnetic fields
NASA Astrophysics Data System (ADS)
Abd El-Bar, S. E.; Abd El-Salam, F. A.
2018-05-01
In this work Lagrange's planetary equations for a charged satellite subjected to the Earth's gravitational and magnetic force fields are solved. The Earth's gravitational, magnetic and electric force components are obtained and expressed in terms of orbital elements. The variational equations of the orbit for the considered model are derived in Keplerian elements. A fully analytical solution of the problem is obtained. The temporal rates of change of the orbital elements of the spacecraft are integrated via Lagrange's planetary equations and the integrals of the normalized Keplerian motion obtained by Ahmed (Astron. J. 107(5):1900, 1994).
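For reference, the semimajor-axis member of Lagrange's planetary equations in its standard form (with disturbing function R, mean anomaly M, mean motion n and gravitational parameter μ; quoted from the standard literature rather than from the paper's normalized variables):

    \frac{da}{dt} = \frac{2}{na}\,\frac{\partial R}{\partial M},
    \qquad n = \sqrt{\mu / a^{3}}.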
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Martin, David
2014-05-01
Fire has important impacts on the spatio-temporal distribution of soil nutrients (Outeiro et al., 2008). This impact depends on fire severity, the topography of the burned area, the type of soil and vegetation affected, and the post-fire meteorological conditions. Fire produces a complex mosaic of impacts on soil that can be extremely variable in space and time, even at small plot scale. In order to assess and map such a heterogeneous distribution, testing interpolation methods is fundamental to identify the best estimator and to better understand the spatial distribution of soil nutrients. The objective of this work is to identify the short-term spatial variability of water-extractable calcium and magnesium after a low-severity grassland fire. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 masl. Four days after the fire, a 400 m² plot (20 x 20 m, with 5 m spacing between sampling points) was laid out in the burned area. Twenty-five samples from the topsoil (0-5 cm) were collected immediately after the fire (IAF) and 2, 5, 7 and 9 months after the fire (a total of 125 across all sampling dates). The original data for water-extractable calcium and magnesium did not follow a Gaussian distribution, so a natural logarithm (ln) transformation was applied to normalize the data. Significant differences in water-extractable calcium and magnesium among sampling dates were assessed with a one-way ANOVA test on the ln data. To assess the spatial variability of water-extractable calcium and magnesium, we tested several interpolation methods: Ordinary Kriging (OK); Inverse Distance Weighting (IDW) with the powers of 1, 2, 3 and 4; Radial Basis Functions (RBF) - Inverse Multiquadratic (IMT), Multilog (MTG), Multiquadratic (MTQ), Natural Cubic Spline (NCS) and Thin Plate Spline (TPS); and Local Polynomial (LP) with the powers of 1 and 2. Interpolation tests were carried out on the ln data. The best interpolation method was identified using cross-validation, obtained by taking each observation in turn out of the sample pool and estimating it from the remaining ones. The errors produced (observed minus predicted) were used to evaluate the performance of each method; from them, the mean error (ME) and root mean square error (RMSE) were calculated. The best method was the one with the lowest RMSE (Pereira et al., in press). The results showed significant differences among sampling dates in water-extractable calcium (F = 138.78, p < 0.001) and water-extractable magnesium (F = 160.66, p < 0.001). Water-extractable calcium and magnesium were high IAF, decreased until 7 months after the fire, and rose again at the last sampling date. Among the tested methods, the most accurate for interpolating water-extractable calcium were: IAF-IDW1; 2 months-IDW1; 5 months-OK; 7 months-IDW4; and 9 months-IDW3. For water-extractable magnesium the best interpolation techniques were: IAF-IDW2; 2 months-IDW1; 5 months-IDW3; 7 months-TPS; and 9 months-IDW1. These results suggest that the spatial variability of these water-extractable nutrients changes with time. The causes of this variability will be discussed during the presentation. References Outeiro, L., Aspero, F., Ubeda, X. (2008) Geostatistical methods to study spatial variability of soil cation after a prescribed fire and rainfall. Catena, 74: 310-320. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L.
Modelling the impacts of wildfire on ash thickness in a short-term period, Land Degradation and Development, (In Press), DOI: 10.1002/ldr.2195
Legendre modified moments for Euler's constant
NASA Astrophysics Data System (ADS)
Prévost, Marc
2008-10-01
Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials, see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials-Theory and Practice, NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 294. Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4
On multiple orthogonal polynomials for discrete Meixner measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokin, Vladimir N
2010-12-07
The paper examines two examples of multiple orthogonal polynomials that generalize orthogonal polynomials of a discrete variable, namely the Meixner polynomials. One example is connected with a discrete Nikishin system, while the other leads to essentially new effects. The limit distribution of the zeros of the polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function, and the 'Tang-Toennies' and Scoles-Aziz 'HFD', exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
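As a sketch of the simplest of these families, the EMO potential is a Morse function whose exponent coefficient β(r) is a polynomial in a dimensionless radial variable y_p(r) = (r^p - r_e^p)/(r^p + r_e^p). The Python sketch below encodes that form; the parameter values (De, r_e, the β coefficients, p) are illustrative, not fitted constants from the program.

    import numpy as np

    def emo_potential(r, De, re, betas, p=3):
        """Expanded Morse Oscillator: V(r) = De * (1 - exp(-beta(r)*(r-re)))^2,
        with beta(r) a polynomial in y_p = (r^p - re^p)/(r^p + re^p)."""
        y = (r**p - re**p) / (r**p + re**p)
        beta = np.polynomial.polynomial.polyval(y, betas)
        return De * (1.0 - np.exp(-beta * (r - re))) ** 2

    r = np.linspace(1.5, 8.0, 200)                     # internuclear distance
    V = emo_potential(r, De=20000.0, re=2.5, betas=[1.0, 0.1, -0.05])
    print(V[0], V[-1])   # repulsive wall is high; V approaches De at large r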
NASA Astrophysics Data System (ADS)
Benahmed Daho, Sid Ahmed
2010-02-01
The main purpose of this article is to discuss the use of GPS positioning together with a gravimetrically determined geoid for deriving orthometric heights in the north of Algeria, where only a limited number of GPS stations with known orthometric heights are available, and at the same time to check the possibility of substituting classical spirit levelling. For this work, 247 homogeneously distributed GPS stations, collected from the international TYRGEONET project as well as local GPS/levelling surveys, have been used. The GPS/levelling geoidal heights were obtained by connecting the points to the levelling network, while gravimetric geoidal heights were interpolated from the geoid model computed by the Geodetic Laboratory of the National Centre of Spatial Techniques from gravity data supplied by BGI. In order to minimise the discrepancies, systematic errors and datum inconsistencies between the available height data sets, we tested two parametric models of corrector surface: a four-parameter transformation and a third-order polynomial model were used to find an adequate functional representation of the correction that should be applied to the gravimetric geoid. The comparisons based on these GPS campaigns prove that a good fit between the geoid model and the GPS/levelling data is reached when the third-order polynomial is used as the corrector surface, and that orthometric heights can be derived from GPS observations with an accuracy acceptable for low-order levelling network densification. In addition, the adopted methodology has also been applied to the altimetric monitoring of a storage reservoir situated 40 km from the town of Oran. The comparison between the computed and the observed orthometric heights allows us to affirm that the alternative of levelling by GPS is attractive for this monitoring task.
Multi Objective Controller Design for Linear System via Optimal Interpolation
NASA Technical Reports Server (NTRS)
Ozbay, Hitay
1996-01-01
We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady-state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0) and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))bar-Q(s) for a prescribed polynomial v(s). Bar-Q(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of bar-Q(s) are fixed. Obeying these constraints, bar-Q(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
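A generic sketch of the sequence-estimation core: given a local data cost for each candidate interpolation function at each missing pixel and a penalty for switching functions between neighbouring positions (both placeholders here, standing in for the paper's probabilistic model), the Viterbi recursion returns the minimum-cost state sequence.

    import numpy as np

    def viterbi(local_cost, switch_penalty):
        """Pick the minimum-cost sequence of interpolation functions.
        local_cost[i, s]: cost of using interpolator s at missing pixel i;
        switch_penalty: cost of changing interpolator between neighbours."""
        n, S = local_cost.shape
        cost = local_cost[0].copy()
        back = np.zeros((n, S), dtype=int)
        for i in range(1, n):
            trans = cost[:, None] + switch_penalty * (1 - np.eye(S))
            back[i] = np.argmin(trans, axis=0)
            cost = trans[back[i], np.arange(S)] + local_cost[i]
        path = [int(np.argmin(cost))]
        for i in range(n - 1, 0, -1):
            path.append(int(back[i][path[-1]]))
        return path[::-1]

    # Three hypothetical directional interpolators scored at 6 missing pixels:
    rng = np.random.default_rng(1)
    print(viterbi(rng.random((6, 3)), switch_penalty=0.2))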
Direct calculation of modal parameters from matrix orthogonal polynomials
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Guillaume, Patrick
2011-10-01
The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple-input multiple-output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly; in particular, the transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data are used.
NASA Technical Reports Server (NTRS)
Edwards, T. R. (Inventor)
1985-01-01
Apparatus for doubling the data density rate of an analog-to-digital converter, or doubling the data density storage capacity of a memory device, is discussed. An interstitial data point midway between adjacent data points in a data stream having an even number of equal-interval data points is generated by applying a set of predetermined one-dimensional convolute integer coefficients, which can include a set of multiplier coefficients and a normalizer coefficient. Interpolator means apply the coefficients to the data points, weighting equally on each side of the center of the even number of equal-interval data points, to obtain an interstitial point value at the center of the data points. A one-dimensional output data set, which is twice as dense as a one-dimensional equal-interval input data set, can be generated, where the output data set includes interstitial points interdigitated between adjacent data points of the input data set. The method for generating the set of interstitial points is a weighted, nearest-neighbor, non-recursive, moving, smoothing averaging technique, equivalent to applying a polynomial regression calculation to the data set.
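A plausible instance of such a coefficient set is the symmetric 4-point cubic kernel with multipliers (-1, 9, 9, -1) and normalizer 16, which reproduces cubic data exactly at the half-sample position. The sketch below is our reconstruction of the idea, not necessarily the patent's coefficient set.

    import numpy as np

    def interstitial_points(data):
        """Insert a midpoint between adjacent samples using the symmetric
        4-point convolute weights (-1, 9, 9, -1)/16 (nearest-neighbour,
        non-recursive). End intervals fall back to simple averaging."""
        mids = np.empty(len(data) - 1)
        mids[0] = 0.5 * (data[0] + data[1])
        mids[-1] = 0.5 * (data[-2] + data[-1])
        for i in range(1, len(data) - 2):
            mids[i] = (-data[i-1] + 9*data[i] + 9*data[i+1] - data[i+2]) / 16.0
        out = np.empty(2 * len(data) - 1)
        out[0::2] = data           # original samples
        out[1::2] = mids           # interdigitated interstitial points
        return out

    # Exact for the cubic-like data x^2: midpoints land on (k + 0.5)^2.
    print(interstitial_points(np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])))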
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size-luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
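A minimal sketch of the von Neumann scheme: for each trial lag, merge the two light curves with one of them shifted, sort by time, and compute the mean-square successive difference of the combined series; the minimizing lag is the estimate. The synthetic light curves below are illustrative.

    import numpy as np

    def von_neumann_lag(t1, f1, t2, f2, lags):
        """Scan trial delays; the lag minimizing von Neumann's mean-square
        successive difference of the merged series is the estimate."""
        def estimator(tau):
            t = np.concatenate([t1, t2 - tau])
            f = np.concatenate([f1, f2])
            f = f[np.argsort(t)]
            return np.mean(np.diff(f) ** 2)
        scores = [estimator(tau) for tau in lags]
        return lags[int(np.argmin(scores))]

    rng = np.random.default_rng(2)
    sig = lambda t: np.sin(t / 15.0)                  # driving variability
    t1 = np.sort(rng.uniform(0, 200, 120)); true_lag = 12.0
    t2 = np.sort(rng.uniform(0, 200, 120))
    f1 = sig(t1) + 0.05 * rng.normal(size=t1.size)
    f2 = sig(t2 - true_lag) + 0.05 * rng.normal(size=t2.size)
    print(von_neumann_lag(t1, f1, t2, f2, np.linspace(0, 30, 121)))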
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by using simultaneously observed ranges to Lageos from pairs of stations, in a differencing mode, was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network/satellite-pass configurations. Least-squares approximation using monomials and Chebyshev polynomials is compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but also further reduces the effects of model biases on the results, as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
Simple cosmological model with inflation and late times acceleration
NASA Astrophysics Data System (ADS)
Szydłowski, Marek; Stachowski, Aleksander
2018-03-01
In the framework of polynomial Palatini cosmology, we investigate a simple homogeneous and isotropic cosmological model with matter in the Einstein frame. We show that in this model early inflation appears during cosmic evolution, together with an accelerating phase of the expansion at late times. In this frame we obtain the Friedmann equation with matter and dark energy in the form of a scalar field with a potential whose form is determined in a covariant way by the Ricci scalar of the FRW metric. The energy densities of matter and dark energy are also parameterized through the Ricci scalar. Early inflation is obtained only for an infinitesimally small fraction of the energy density of matter. There is an interaction between the matter and dark energy, because the dark energy is decaying. To characterize inflation we calculate the slow-roll parameters and the constant-roll parameter in terms of the Ricci scalar. We find a characteristic behavior of the density of dark energy as a function of cosmic time: it follows a logistic-like curve which interpolates between two almost-constant-value phases. From the required number of N-folds we find a bound on the model parameter.
NASA Astrophysics Data System (ADS)
Liu, Chun-Ho; Leung, Dennis Y. C.
2006-02-01
This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials, solving the momentum and continuity equations together with the conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution into the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as previous modelling results available in the literature.
NASA Astrophysics Data System (ADS)
Smoczek, Jaroslaw
2015-10-01
The paper deals with the problem of reducing the residual vibration and limiting the transient oscillations of a flexible and underactuated system with respect to variation of the operating conditions. A comparative study of generalized predictive control (GPC) and a fuzzy scheduling scheme, developed on the basis of the P1-TS fuzzy theory, the local pole placement method and interval analysis of the closed-loop system polynomial coefficients, is addressed to the problem of flexible crane control. Two alternative GPC-based methods are proposed that enable this technique to be realized either with or without a sensor of payload deflection. The first control technique is based on the recursive least squares (RLS) method applied to on-line estimation of the parameters of a linear parameter varying (LPV) model of the crane dynamic system. The second GPC-based approach relies on payload deflection feedback estimated using a pendulum model whose parameters are interpolated using the P1-TS fuzzy system. Feasibility and applicability of the developed methods were confirmed through experimental verification performed on a laboratory-scale overhead crane.
Gradient Augmented Level Set Method for Two Phase Flow Simulations with Phase Change
NASA Astrophysics Data System (ADS)
Anumolu, C. R. Lakshman; Trujillo, Mario F.
2016-11-01
A sharp interface capturing approach is presented for two-phase flow simulations with phase change. The Gradient Augmented Level Set method is coupled with the two-phase momentum and energy equations to advect the liquid-gas interface and predict heat transfer with phase change. The Ghost Fluid Method (GFM) is adopted for the velocity to discretize the advection and diffusion terms in the interfacial region. Furthermore, the GFM is employed to treat the discontinuities in the stress tensor, velocity, and temperature gradient, yielding an accurate treatment of the jump conditions. Thermal convection and diffusion terms are approximated by explicitly identifying the interface location, resulting in a sharp treatment for the energy solution. This sharp treatment is extended to estimate the interfacial mass transfer rate. Within each computational cell, a d-cubic Hermite interpolating polynomial is employed to describe the interface location, which is locally fourth-order accurate. This extent of subgrid level description provides an accurate methodology for treating various interfacial processes with a high degree of sharpness. The ability to predict the interface and temperature evolutions accurately is illustrated by comparing numerical results with existing 1D to 3D analytical solutions.
NASA Astrophysics Data System (ADS)
Liu, Changying; Iserles, Arieh; Wu, Xinyuan
2018-03-01
The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective usage of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
Maximum life spiral bevel reduction design
NASA Technical Reports Server (NTRS)
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-01-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Independence polynomial and matching polynomial of the Koch network
NASA Astrophysics Data System (ADS)
Liao, Yunhua; Xie, Xiaoliang
2015-11-01
The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the "#P-complete" class, which indicates that these problems are computationally "intractable". We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
Investigating Trojan Asteroids at the L4/L5 Sun-Earth Lagrange Points
NASA Technical Reports Server (NTRS)
John, K. K.; Graham, L. D.; Abell, P. A.
2015-01-01
Investigations of Earth's Trojan asteroids will have benefits for science, exploration, and resource utilization. By sending a small spacecraft to the Sun-Earth L4 or L5 Lagrange points to investigate near-Earth objects, Earth's Trojan population can be better understood. This could lead to future missions for larger precursor spacecraft as well as human missions. The presence of objects in the Sun-Earth L4 and L5 Lagrange points has long been suspected, and in 2010 NASA's Wide-field Infrared Survey Explorer (WISE) detected a 300 m object. To investigate these Earth Trojan asteroid objects, it is both essential and feasible to send spacecraft to these regions. By exploring a wide field area, a small spacecraft equipped with an IR camera could hunt for Trojan asteroids and other Earth co-orbiting objects at the L4 or L5 Lagrange points in the near-term. By surveying the region, a zeroth-order approximation of the number of objects could be obtained with some rough constraints on their diameters, which may lead to the identification of potential candidates for further study. This would serve as a precursor for additional future robotic and human exploration targets. Depending on the inclination of these potential objects, they could be used as proving areas for future missions in the sense that the delta-V's to get to these targets are relatively low as compared to other rendezvous missions. They can serve as platforms for extended operations in deep space while interacting with a natural object in microgravity. Theoretically, such low inclination Earth Trojan asteroids exist. By sending a spacecraft to L4 or L5, these likely and potentially accessible targets could be identified.
Zhang, Pangzhen; Howell, Kate; Krstic, Mark; Herderich, Markus; Barlow, Edward William R.; Fuentes, Sigfredo
2015-01-01
Rotundone is a sesquiterpene that gives grapes and wine a desirable 'peppery' aroma. Previous research has reported that growing grapevines in a cool climate is an important factor driving rotundone accumulation in grape berries and wine. This study used historical data sets to investigate which weather parameters most influence rotundone concentration in grape berries and wine. For this purpose, wines produced from 15 vintages from the same Shiraz vineyard (The Old Block, Mount Langi Ghiran, Victoria, Australia) were analysed for rotundone concentration and compared to comprehensive weather data and minimal temperature information. Degree hours were obtained by interpolating available temperature information from the vineyard site using a simple piecewise cubic Hermite interpolating polynomial (PCHIP) method. Results showed that the highest concentrations of rotundone were consistently found in wines from cool and wet seasons. Principal Component Analysis (PCA) showed that the concentration of rotundone in wine was negatively correlated with daily solar exposure and grape bunch zone temperature, and positively correlated with vineyard water balance. Finally, models were constructed based on the Gompertz function to describe the dynamics of rotundone concentration in berries through the ripening process according to phenological and thermal times. This characterisation is an important step towards predicting the final quality of the resultant wines based on the evolution of specific compounds in berries according to critical environmental and micrometeorological variables. The modelling techniques described in this paper were able to describe the behaviour of rotundone concentration based on seasonal weather conditions and grapevine phenological stages, and could potentially be used to predict the final rotundone concentration early in future growing seasons. This could enable the adoption of precision irrigation and canopy management strategies to effectively mitigate adverse impacts on wine quality related to climate change and microclimatic variability, such as heat waves, within a vineyard. PMID:26176692
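For readers unfamiliar with PCHIP, the sketch below shows how sparse temperature readings might be densified and converted to degree hours. The hour stamps, readings and base temperature are hypothetical, and SciPy's PchipInterpolator stands in for whatever implementation the study used.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical sparse record: hour-of-day stamps and temperature readings.
hours = np.array([0.0, 6.0, 9.0, 12.0, 15.0, 18.0, 24.0])
temps = np.array([11.0, 9.0, 15.0, 22.0, 24.0, 18.0, 12.0])

pchip = PchipInterpolator(hours, temps)   # shape-preserving, no overshoot
fine = np.arange(0.0, 24.01, 0.25)        # 15-minute grid
t_fine = pchip(fine)

base = 10.0                               # hypothetical base temperature
degree_hours = np.trapz(np.clip(t_fine - base, 0.0, None), fine)
```

PCHIP is a natural choice here because, unlike an unconstrained cubic spline, it cannot overshoot between data points and so never invents spurious temperature extremes.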
EOS Interpolation and Thermodynamic Consistency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammel, J. Tinka
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
Effect of interpolation on parameters extracted from seating interface pressure arrays.
Wininger, Michael; Crane, Barbara
2014-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the difference between interpolation degrees of 2, 4, and 8 times is nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
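A minimal illustration of the filter-then-interpolate recommendation, with SciPy's spline-based zoom at order 1 (linear) and order 3 (cubic) standing in for the bilinear and bicubic-spline paradigms; the array and parameters are synthetic, not the study's data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

pressure = np.random.rand(16, 16)                 # stand-in 16x16 sensor array

smoothed = gaussian_filter(pressure, sigma=1.0)   # filter FIRST (strong effect)
bilinear = zoom(smoothed, 4, order=1)             # 4x density, linear spline
bicubic = zoom(smoothed, 4, order=3)              # 4x density, cubic spline

# Once filtering precedes interpolation, the two paradigms differ only slightly.
print(np.max(np.abs(bilinear - bicubic)))
```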
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2008-10-01
We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e-[phi](x), giving a unified treatment for the so-called Freud (i.e., when [phi] has polynomial growth at infinity) and Erdös (when [phi] grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vignat, C.; Lamberti, P. W.
2009-10-15
Recently, Cariñena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.
Stabilisation of discrete-time polynomial fuzzy systems via a polynomial lyapunov approach
NASA Astrophysics Data System (ADS)
Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer
2018-02-01
This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered, and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
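A sketch of the bilinear head interpolation, assuming heads are indexed by fractional row/column position within the larger-scale grid; the mapping from model coordinates to these fractional indices (row/column spacings and offsets) is omitted.

```python
import numpy as np

def bilinear_head(h, x, y):
    """Bilinear interpolation of heads h[i, j] defined at the integer cell
    centers (row i, column j) of the larger-scale model. Minimal sketch:
    x and y are fractional row/column positions."""
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - i0, y - j0
    return ((1 - dx) * (1 - dy) * h[i0, j0]
            + dx * (1 - dy) * h[i0 + 1, j0]
            + (1 - dx) * dy * h[i0, j0 + 1]
            + dx * dy * h[i0 + 1, j0 + 1])

# Head at a perimeter point of the embedded model:
h = np.array([[10.0, 10.5], [11.0, 11.8]])
print(bilinear_head(h, 0.25, 0.5))   # weighted blend of four cell centers
```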
Hadamard Factorization of Stable Polynomials
NASA Astrophysics Data System (ADS)
Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar
2011-11-01
The stable (Hurwitz) polynomials are important in the study of systems of differential equations and in control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials $p, q \in \mathbb{R}[x]$:
$p(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$,
$q(x) = b_m x^m + b_{m-1} x^{m-1} + \dots + b_1 x + b_0$.
The Hadamard product $p \times q$ is defined as
$(p \times q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + \dots + a_1 b_1 x + a_0 b_0$,
where $k = \min(m, n)$. Known results (see [16]) show that if $p, q \in \mathbb{R}[x]$ are stable polynomials, then $p \times q$ is stable as well, i.e. the class of stable polynomials is closed under the Hadamard product; however, the converse is not always true: not every stable polynomial of degree $n$ has a factorization into two stable polynomials of the same degree when $n > 4$ (see [15]). In this work we give some conditions for the existence of a Hadamard factorization of stable polynomials.
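With polynomials stored as coefficient arrays, the Hadamard product is just a truncated element-wise multiplication. A small sketch (variable names ours):

```python
import numpy as np

def hadamard(p, q):
    """Coefficient-wise (Hadamard) product of two polynomials stored as
    ascending coefficient arrays [a0, a1, ..., an]."""
    k = min(len(p), len(q))
    return np.asarray(p[:k], dtype=float) * np.asarray(q[:k], dtype=float)

# Two Hurwitz-stable quadratics: x^2 + 2x + 1 and x^2 + 3x + 2.
print(hadamard([1.0, 2.0, 1.0], [2.0, 3.0, 1.0]))   # x^2 + 6x + 2, also stable
```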
NASA Astrophysics Data System (ADS)
Doha, E. H.
2004-01-01
Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
Stable Numerical Approach for Fractional Delay Differential Equations
NASA Astrophysics Data System (ADS)
Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.
2017-12-01
In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. In special cases, the Jacobi polynomials reduce to well-known polynomials such as (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and the root mean square error are calculated for the illustrated examples and presented in tables for comparison purposes. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.
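The special-case reductions can be checked numerically. For instance, the Jacobi polynomial with α = β = 0 coincides exactly with the Legendre polynomial, while the Chebyshev cases hold up to a degree-dependent normalization; a quick sketch using SciPy:

```python
import numpy as np
from scipy.special import eval_jacobi, eval_legendre

x = np.linspace(-1.0, 1.0, 7)
n = 4
# Jacobi with alpha = beta = 0 is exactly the Legendre polynomial.
assert np.allclose(eval_jacobi(n, 0.0, 0.0, x), eval_legendre(n, x))
# The Chebyshev cases (alpha = beta = +/-1/2) agree with eval_jacobi
# only up to a degree-dependent normalization constant.
```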
Percolation critical polynomial as a graph invariant
Scullard, Christian R.
2012-10-18
Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^{-7}.
On Certain Wronskians of Multiple Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Zhang, Lun; Filipuk, Galina
2014-11-01
We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties, such as an explicit representation in terms of hypergeometric functions, differential equations and recurrence relations, are derived.
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Prediction of a Densely Loaded Particle-Laden Jet using a Euler-Lagrange Dense Spray Model
NASA Astrophysics Data System (ADS)
Pakseresht, Pedram; Apte, Sourabh V.
2017-11-01
Modeling of a dense spray regime using an Euler-Lagrange discrete-element approach is challenging because of local high volume loading. A subgrid cluster of droplets can lead to locally high void fractions for the disperse phase. Under these conditions, spatio-temporal changes in the carrier phase volume fractions, which are commonly neglected in spray simulations in an Euler-Lagrange two-way coupling model, could become important. Accounting for the carrier phase volume fraction variations, leads to zero-Mach number, variable density governing equations. Using pressure-based solvers, this gives rise to a source term in the pressure Poisson equation and a non-divergence free velocity field. To test the validity and predictive capability of such an approach, a round jet laden with solid particles is investigated using Direct Numerical Simulation and compared with available experimental data for different loadings. Various volume fractions spanning from dilute to dense regimes are investigated with and without taking into account the volume displacement effects. The predictions of the two approaches are compared and analyzed to investigate the effectiveness of the dense spray model. Financial support was provided by National Aeronautics and Space Administration (NASA).
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary BTEX model. A facilitated transporter model is used to describe mass transfer between the plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
Modified Interior Distance Functions (Theory and Methods)
NASA Technical Reports Server (NTRS)
Polyak, Roman A.
1995-01-01
In this paper we introduce and develop the theory of Modified Interior Distance Functions (MIDF's). The MIDF is a Classical Lagrangian (CL) for a constrained optimization problem which is equivalent to the initial one and can be obtained from the latter by a monotone transformation of both the objective function and the constraints. In contrast to the Interior Distance Functions (IDF's), which played a fundamental role in Interior Point Methods (IPM's), the MIDF's are defined on an extended feasible set and, along with the center, have two extra tools which control the computational process: the barrier parameter and the vector of Lagrange multipliers. The extra tools give the MIDF's the very important properties of Augmented Lagrangians; one can consider the MIDF's as Interior Augmented Lagrangians. This makes MIDF's similar in spirit to Modified Barrier Functions (MBF's), although there are fundamental differences between them, both in theory and in methods. Based on MIDF theory, Modified Center Methods (MCM's) have been developed and analyzed. The MCM's find an unconstrained minimizer in primal space and update the Lagrange multipliers, while both the center and the barrier parameter can be fixed or updated at each step. The convergence of the MCM's is investigated, and their rate of convergence is estimated. The extension of the feasible set and the special role of the Lagrange multipliers allow the development of MCM's which, in the case of nondegenerate constrained optimization, produce primal and dual sequences that converge to the primal-dual solution with a linear rate, even when both the center and the barrier parameter are fixed. Moreover, every Lagrange multipliers update shrinks the distance to the primal-dual solution by a factor 0 < gamma < 1, which can be made as small as one wants by choosing a fixed interior point as a 'center' and a fixed but large enough barrier parameter. The numerical realization of the MCM leads to the Newton MCM (NMCM): the approximation for the primal minimizer is found by Newton's method, followed by the Lagrange multipliers update. Due to the MCM convergence when both the center and the barrier parameter are fixed, the conditioning of the MIDF Hessian and the neighborhood of the primal minimizer where Newton's method is 'well' defined remain stable. This contributes to both the complexity and the numerical stability of the NMCM.
NASA Astrophysics Data System (ADS)
Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.
2006-10-01
In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.
The Gibbs Phenomenon for Series of Orthogonal Polynomials
ERIC Educational Resources Information Center
Fay, T. H.; Kloppers, P. Hendrik
2006-01-01
This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…
Determinants with orthogonal polynomial entries
NASA Astrophysics Data System (ADS)
Ismail, Mourad E. H.
2005-06-01
We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which have p_n in the top left-hand corner. As examples, we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.
Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.
Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong
2017-11-01
Medical image three-dimensional (3D) interpolation is an important means to improve the image quality in 3D reconstruction. In image processing, time-frequency domain transforms are efficient tools. In this article, several time-frequency domain transform methods are applied to and compared in 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. Our algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator. The sub-images of the wavelet decomposition are processed separately: the Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted. The target interpolated image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.
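One level of the pipeline might look like the following sketch, with PyWavelets supplying the 2D wavelet transform and scipy.ndimage the Sobel operator. The wavelet family, the random stand-in image, and the omission of the inter-slice matching step are our assumptions, not details from the article.

```python
import numpy as np
import pywt
from scipy import ndimage

img = np.random.rand(64, 64)                 # stand-in for one CT slice

# One-level 2D wavelet decomposition: approximation + detail sub-images.
cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')

# A Sobel edge map of the low-frequency sub-image guides the matching
# interpolation there; high-frequency sub-bands pass through untouched.
edges = np.hypot(ndimage.sobel(cA, axis=0), ndimage.sobel(cA, axis=1))

# After interpolating cA (omitted), the result is recovered by reconstruction.
recon = pywt.idwt2((cA, (cH, cV, cD)), 'db2')
```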
Research progress and hotspot analysis of spatial interpolation
NASA Astrophysics Data System (ADS)
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literature related to spatial interpolation between 1982 and 2017, as included in the Web of Science core database, is used as the data source, and a visualization analysis is carried out based on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development and rapid development. Eleven clustering groups with strong cross effects are identified; the main themes converge on spatial interpolation theory, the practical application and case study of spatial interpolation, and the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework; it is strongly interdisciplinary and is widely used in various fields.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to adapt multiple algorithms, helping to decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation due to the electronic system of the TDI-CCD together with the re-sampling process, and 4) data integration. Processes 1) to 3) utilize diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even using an Intel Xeon X5550 processor, a conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 × 1462. A literature study found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation, which is based on WCF[1], uses a client/server (C/S) architecture and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity. Ultimately, HPC is achieved at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced the simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework can provide essentially unlimited computation capacity, provided that the network and the task management server are affordable, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers
NASA Technical Reports Server (NTRS)
Bellan, Josette; Radhakrishnan, Senthilkumaran
2012-01-01
High-fidelity models of plume-regolith interaction are difficult to develop because of the widely disparate flow conditions that exist in this process. The gas in the core of a rocket plume can often be modeled as a time-dependent, high-temperature, turbulent, reacting continuum flow. However, due to the vacuum conditions on the lunar surface, the mean free path of molecules in the outer parts of the plume is too long for the continuum assumption to remain valid. Molecular methods are better suited to model this region of the flow. Finally, granular and multiphase flow models must be employed to describe the dust and debris that are displaced from the surface, as well as how a crater is formed in the regolith. At present, standard commercial CFD (computational fluid dynamics) software is not capable of coupling each of these flow regimes to provide an accurate representation of this flow process, necessitating the development of custom software. This software solves the fluid-flow-governing equations in an Eulerian framework, coupled with the particle transport equations, which are solved in a Lagrangian framework. It uses a fourth-order explicit Runge-Kutta scheme for temporal integration and an eighth-order central finite differencing scheme for spatial discretization. The non-linear terms in the governing equations are recast in cubic skew-symmetric form to reduce aliasing error. The second-derivative viscous terms are computed using eighth-order narrow stencils that provide better diffusion for the highest resolved wave numbers. A fourth-order Lagrange interpolation procedure is used to obtain gas-phase variable values at the particle locations.
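In one dimension, fourth-order Lagrange interpolation uses the four grid points surrounding the particle. The sketch below (names ours, not from the software described above) reproduces cubic data exactly, consistent with the stated order of accuracy.

```python
import numpy as np

def lagrange4(xg, fg, xp):
    """Fourth-order (4-point) Lagrange interpolation of a gas-phase field
    fg sampled at grid coordinates xg, evaluated at particle location xp."""
    val = 0.0
    for i in range(4):
        w = 1.0
        for j in range(4):
            if j != i:
                w *= (xp - xg[j]) / (xg[i] - xg[j])
        val += w * fg[i]
    return val

# A cubic field is reproduced exactly by the 4-point stencil:
xg = np.array([0.0, 1.0, 2.0, 3.0])
assert np.isclose(lagrange4(xg, xg**3, 1.5), 1.5**3)
```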
From sequences to polynomials and back, via operator orderings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu
2013-12-15
Bender and Dunne ["Polynomials and operator orderings," J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^{n−k}, where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (qp + pq)/2. Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although the interpolation method based on the Gaussian radial basis function (GRBF) has high precision, the long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions, with a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
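The underlying mathematics can be sketched on the CPU with SciPy's RBFInterpolator and a Gaussian kernel. The sample sites, intensities, and shape parameter epsilon below are synthetic, and none of the paper's CUDA-level optimizations (coalesced access, shared memory, blocking) appear here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))        # scattered pixel sample sites
vals = np.sin(6 * pts[:, 0]) * pts[:, 1]      # synthetic intensities

# The Gaussian kernel requires the shape parameter epsilon.
rbf = RBFInterpolator(pts, vals, kernel='gaussian', epsilon=3.0)

gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = np.column_stack([gx.ravel(), gy.ravel()])
interpolated = rbf(grid).reshape(64, 64)      # dense 64 x 64 interpolation
```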
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.
1976-01-01
Geographical correction effects on LANDSAT image data are identified using the nearest neighbor, bilinear interpolation, and bicubic interpolation techniques. Potential impacts of registration on image compression and classification are explored.
Extending Romanovski polynomials in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quesne, C.
2013-12-15
Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.
Polynomial solutions of the Monge-Ampère equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aminov, Yu A
2014-11-30
The question of the existence of polynomial solutions to the Monge-Ampère equation $z_{xx} z_{yy} - z_{xy}^2 = f(x,y)$ is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found, and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
Solving the interval type-2 fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve such equations. In this study, we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find the real roots of fuzzy polynomial equations; here, the ranking method is applied to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
NASA Astrophysics Data System (ADS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
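To make the polynomial smoothing concrete, below is a minimal Chebyshev smoother built on the standard three-term recurrence for the shifted, scaled Chebyshev residual polynomial. The eigenvalue bounds and the 1D Poisson test matrix are illustrative assumptions, not details from the paper; production multigrid codes typically target only the upper part of the spectrum.

```python
import numpy as np

def chebyshev_smooth(A, b, x, lmin, lmax, steps):
    """Chebyshev polynomial smoother for an SPD matrix A whose eigenvalues
    lie in [lmin, lmax]. Tracks the scalars T_k(sigma) and applies the
    Chebyshev three-term recurrence directly to the iterates."""
    theta = 0.5 * (lmax + lmin)            # interval center
    delta = 0.5 * (lmax - lmin)            # interval half-width
    sigma = theta / delta
    x_prev = x.copy()
    x = x + (b - A @ x) / theta            # degree-1 Chebyshev step
    t_prev, t = 1.0, sigma                 # T_{k-1}(sigma), T_k(sigma)
    for _ in range(steps - 1):
        t_next = 2.0 * sigma * t - t_prev
        r = b - A @ x
        x, x_prev = ((2.0 * sigma * t / t_next) * x
                     - (t_prev / t_next) * x_prev
                     + (2.0 * t / (delta * t_next)) * r), x
        t_prev, t = t, t_next
    return x

# Illustration: damp the error of a 1D Poisson matrix (eigenvalues in (0, 4)).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = chebyshev_smooth(A, np.zeros(n), np.random.rand(n),
                     lmin=0.13, lmax=4.0, steps=4)
```

Note that every operation here is a matrix-vector product or a vector update, with none of the sequential dependencies that make Gauss-Seidel hard to parallelize, which is exactly the trade-off the abstract describes.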
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
A note on the zeros of Freud-Sobolev orthogonal polynomials
NASA Astrophysics Data System (ADS)
Moreno-Balcazar, Juan J.
2007-10-01
We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^{-x^4} on the real line are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^{-x^4}. Some numerical examples are shown.
Optimal Chebyshev polynomials on ellipses in the complex plane
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland
1989-01-01
The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas.
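An IDW interpolator with a tunable power, matching the power = 1, 2, 3 variants compared above, takes only a few lines; the implementation below is a generic sketch on synthetic data, not the authors' code.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2):
    """Inverse distance weighting: each query point receives a weighted
    average of the observations, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # guard against zero distances
    w = 1.0 / d ** power
    return (w @ z_obs) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(166, 2))   # unevenly scattered sample sites
z = rng.lognormal(3.0, 0.5, size=166)     # synthetic concentrations
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(0, 100, 50), np.linspace(0, 100, 50))])
z_hat = idw(xy, z, grid, power=2).reshape(50, 50)
```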
A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE
NASA Technical Reports Server (NTRS)
Truong, T. K.
1994-01-01
This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial that least-squares fits uniformly spaced data. The program allows the user to specify the tolerable least squares error in the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can fit up to a 100th-degree polynomial. All computations in the program are carried out in double precision format for real numbers and long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
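AKLSQF's degree-escalation loop can be sketched as follows; numpy's polyfit stands in for the program's orthogonal factorial polynomials and Stirling-number reduction, and the data and tolerance are made up for illustration:

    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        """Raise the polynomial degree until the least-squares error meets tol."""
        for deg in range(1, max_degree + 1):
            coeffs = np.polyfit(x, y, deg)
            err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
            if err <= tol:
                break
        return coeffs, deg, err

    x = np.linspace(0.0, 1.0, 21)          # uniformly spaced, as AKLSQF expects
    coeffs, deg, err = fit_to_tolerance(x, np.exp(x), tol=1e-6)
    print(deg, err)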
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
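The paper's central recommendation, treating the power-law degree as a fitted parameter rather than a constant, can be sketched with a standard nonlinear fit; the model form, the parameter names (smr for standard metabolic rate, amplitude a, degree d), and the synthetic data are illustrative assumptions, not the author's:

    import numpy as np
    from scipy.optimize import curve_fit

    def activity(u, smr, a, d):
        return smr + a * u ** d            # degree d is estimated, not fixed at 1

    u = np.linspace(0.2, 2.0, 30)          # hypothetical swimming speeds
    rng = np.random.default_rng(0)
    y = activity(u, 50.0, 12.0, 2.4) + rng.normal(0.0, 0.5, u.size)
    params, _ = curve_fit(activity, u, y, p0=(40.0, 10.0, 2.0))
    print(params)                          # recovers roughly (50, 12, 2.4)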
Selection of Optimal Auxiliary Soil Nutrient Variables for Cokriging Interpolation
Song, Genxin; Zhang, Jing; Wang, Ke
2014-01-01
In order to explore the selection of the best auxiliary variables (BAVs) when using the Cokriging method for soil attribute interpolation, this paper investigated the selection of BAVs from terrain parameters, soil trace elements, and soil nutrient attributes when applying Cokriging interpolation to soil nutrients (organic matter, total N, available P, and available K). In total, 670 soil samples were collected in Fuyang, and the nutrient and trace element attributes of the soil samples were determined. Based on the spatial autocorrelation of soil attributes, the Digital Elevation Model (DEM) data for Fuyang were incorporated to explore the correlations among terrain parameters, trace elements, and soil nutrient attributes. Variables with a high correlation to soil nutrient attributes were selected as BAVs for Cokriging interpolation of soil nutrients, and variables with poor correlation were selected as poor auxiliary variables (PAVs). The results of Cokriging interpolations using BAVs and PAVs were then compared. The results indicated that Cokriging interpolation with BAVs yielded more accurate results than Cokriging interpolation with PAVs (the mean absolute errors of the BAV interpolation results for organic matter, total N, available P, and available K were 0.020, 0.002, 7.616, and 12.4702, respectively, and the mean absolute errors of the PAV interpolation results were 0.052, 0.037, 15.619, and 0.037, respectively). The results indicated that Cokriging interpolation with BAVs can significantly improve the accuracy of Cokriging interpolation for soil nutrient attributes. This study provides meaningful guidance and reference for the selection of auxiliary parameters when applying Cokriging interpolation to soil nutrient attributes. PMID:24927129
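The BAV-selection step, ranking candidate covariates by their correlation with the target attribute, might be sketched as follows (variable names and synthetic data are hypothetical; the study used measured terrain and trace-element data):

    import numpy as np

    def rank_auxiliary_variables(target, candidates, names):
        """Rank candidate covariates by |Pearson r| with the target attribute."""
        r = [abs(np.corrcoef(target, c)[0, 1]) for c in candidates]
        return sorted(zip(names, r), key=lambda t: t[1], reverse=True)

    rng = np.random.default_rng(1)
    om = rng.normal(size=100)                       # organic matter (synthetic)
    elev = 0.8 * om + rng.normal(0.0, 0.5, 100)     # correlated candidate
    slope = rng.normal(size=100)                    # uncorrelated candidate
    print(rank_auxiliary_variables(om, [elev, slope], ["elevation", "slope"]))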
Monotonicity preserving splines using rational cubic Timmer interpolation
NASA Astrophysics Data System (ADS)
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
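The rational cubic Timmer interpolant itself is not reproduced here, but the property it targets, monotonicity preservation, can be demonstrated with an off-the-shelf monotone scheme (PCHIP, a different method from the paper's):

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.0, 0.1, 0.9, 1.0])         # monotone data
    f = PchipInterpolator(x, y)                # shape-preserving cubic
    xs = np.linspace(0.0, 3.0, 301)
    print(np.all(np.diff(f(xs)) >= -1e-12))    # the interpolant stays monotone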
Stochastic Estimation via Polynomial Chaos
2015-10-01
This expository report (AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory; reporting period 20-04-2015 to 07-08-2015) discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). The report develops methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and the extended Kalman filter.
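A minimal numerical sketch of the polynomial chaos idea common to the two reports above: a function of a standard normal variable is expanded in probabilists' Hermite polynomials, with coefficients computed by Gauss-Hermite quadrature. The test function exp(X) is chosen only because its expansion coefficients, sqrt(e)/n!, are known in closed form; nothing here is taken from the reports themselves:

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    def pce_coeffs(g, order, quad_pts=40):
        """Coefficients of g(X), X ~ N(0,1), in probabilists' Hermite polynomials."""
        x, w = hermegauss(quad_pts)        # nodes/weights for weight exp(-x^2/2)
        coeffs = []
        for n in range(order + 1):
            basis = np.zeros(n + 1); basis[n] = 1.0
            moment = np.sum(w * g(x) * hermeval(x, basis)) / sqrt(2.0 * pi)
            coeffs.append(moment / factorial(n))   # since E[He_n(X)^2] = n!
        return np.array(coeffs)

    c = pce_coeffs(np.exp, order=6)
    print(c[:3])                           # ~ sqrt(e) * [1, 1, 1/2]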
Degenerate r-Stirling Numbers and r-Bell Polynomials
NASA Astrophysics Data System (ADS)
Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.
2018-01-01
The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.
From Chebyshev to Bernstein: A Tour of Polynomials Small and Large
ERIC Educational Resources Information Center
Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin
2006-01-01
Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.
LIP: The Livermore Interpolation Package, Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-07-06
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
LIP: The Livermore Interpolation Package, Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsch, F N
2011-01-04
This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an arbitrary number of points. The first section of this report describes the overall design of the package, including both forward and inverse interpolation. Sections 2-6 describe each interpolation method in detail. The software that implements this design is summarized function-by-function in Section 7. For a complete example of package usage, refer to Section 8. The report concludes with a few brief notes on possible software enhancements. For guidance on adding other functional forms to LIP, refer to Appendix B. The reader who is primarily interested in using LIP to solve a problem should skim Section 1, then skip to Sections 7.1-4. Finally, jump ahead to Section 8 and study the example. The remaining sections can be referred to in case more details are desired. Changes since version 1.1 of this document include the new Section 3.2.1 that discusses derivative estimation and new Section 6 that discusses the birational interpolation method. Section numbers following the latter have been modified accordingly.
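LIP's API is not reproduced here, but its simplest method, piecewise bilinear interpolation on a rectangular mesh, can be sketched generically (function and variable names are invented for illustration):

    import numpy as np

    def bilinear(xg, yg, f, xq, yq):
        """Piecewise-bilinear interpolation of f sampled on a rectangular mesh."""
        i = np.clip(np.searchsorted(xg, xq) - 1, 0, len(xg) - 2)
        j = np.clip(np.searchsorted(yg, yq) - 1, 0, len(yg) - 2)
        tx = (xq - xg[i]) / (xg[i + 1] - xg[i])
        ty = (yq - yg[j]) / (yg[j + 1] - yg[j])
        return ((1 - tx) * (1 - ty) * f[i, j] + tx * (1 - ty) * f[i + 1, j]
              + (1 - tx) * ty * f[i, j + 1] + tx * ty * f[i + 1, j + 1])

    xg, yg = np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 4)
    F = np.add.outer(xg, 2.0 * yg)          # f(x, y) = x + 2y is exactly bilinear
    print(bilinear(xg, yg, F, 0.37, 0.62))  # 0.37 + 2*0.62 = 1.61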
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint, and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Sendino, J. E.; del Olmo, M. A.
2010-12-23
We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.
NASA Astrophysics Data System (ADS)
Amengonu, Yawo H.; Kakad, Yogendra P.
2014-07-01
Quasivelocity techniques such as Maggi's and Boltzmann-Hamel's equations eliminate Lagrange multipliers from the outset, as opposed to the Euler-Lagrange method, where one has to solve for the n configuration variables and the multipliers as functions of time when there are m nonholonomic constraints. Maggi's equation produces n second-order differential equations, of which (n-m) are derived using the (n-m) independent quasivelocities, while the time derivatives of the m kinematic constraints supply the remaining m second-order differential equations. This technique is applied to derive the dynamics of a differential mobile robot, and a controller that takes these dynamics into account is developed.
Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George
2017-08-15
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
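The LPNN itself is not reproduced here; as a conventional baseline for the same problem, the sketch below solves TDOA localization by nonlinear least squares (the receiver layout, source position, and noise-free measurements are hypothetical):

    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    source = np.array([3.0, 7.0])          # true position (unknown in practice)

    def ranges(p):
        return np.linalg.norm(anchors - p, axis=1)

    # each TDOA defines a hyperbola: range difference relative to receiver 0
    tdoa = ranges(source)[1:] - ranges(source)[0]

    def residuals(p):
        r = ranges(p)
        return (r[1:] - r[0]) - tdoa

    est = least_squares(residuals, np.array([5.0, 5.0])).x
    print(est)                             # converges near [3, 7]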
Plasmonic Roche lobe in metal-dielectric-metal structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiu, Ruei-Cheng; Lan, Yung-Chiang
2013-07-15
This study investigates a plasmonic Roche lobe that is based on a metal-dielectric-metal (MDM) structure using finite-difference time-domain simulations and theoretical analyses. The effective refractive index of the MDM structure has two centers and is inversely proportional to the distance from the position of interest to the centers, in a manner that is analogous to the gravitational potential in a two-star system. The motion of surface plasmons (SPs) strongly depends on the ratio of permittivities at the two centers. The Lagrange point is an unstable equilibrium point for SPs that propagate in the system. After the SPs have passed through the Lagrange point, their spread drastically increases.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
Design and Use of a Learning Object for Finding Complex Polynomial Roots
ERIC Educational Resources Information Center
Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime
2013-01-01
Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight of them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomial has a root and, furthermore, is useful for finding the approximate roots of a complex polynomial. Moreover, we…
Extending a Property of Cubic Polynomials to Higher-Degree Polynomials
ERIC Educational Resources Information Center
Miller, David A.; Moseley, James
2012-01-01
In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields
NASA Astrophysics Data System (ADS)
Milstead, Jonathan
The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials whose ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach based on the concept of asymptotic convergence factors is proposed. Finally, some numerical examples of indefinite polynomial preconditioners are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genest, Vincent X.; Vinet, Luc; Zhedanov, Alexei
The algebra H of the dual -1 Hahn polynomials is derived and shown to arise in the Clebsch-Gordan problem of sl_{-1}(2). The dual -1 Hahn polynomials are the bispectral polynomials of a discrete argument obtained from the q → -1 limit of the dual q-Hahn polynomials. The Hopf algebra sl_{-1}(2) has four generators including an involution; it is also a q → -1 limit of the quantum algebra sl_q(2) and, furthermore, the dynamical algebra of the parabose oscillator. The algebra H, a two-parameter generalization of u(2) with an involution as additional generator, is first derived from the recurrence relation of the -1 Hahn polynomials. It is then shown that H can be realized in terms of the generators of two added sl_{-1}(2) algebras, so that the Clebsch-Gordan coefficients of sl_{-1}(2) are dual -1 Hahn polynomials. An irreducible representation of H involving five-diagonal matrices and connected to the difference equation of the dual -1 Hahn polynomials is constructed.
3-d interpolation in object perception: evidence from an objective performance paradigm.
Kellman, Philip J; Garrigan, Patrick; Shipley, Thomas F; Yin, Carol; Machado, Liana
2005-06-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D interpolation and tested a new theory of 3-D contour interpolation, termed 3-D relatability. The theory indicates, for a given edge, which orientations and positions of other edges in space may be connected to it by interpolation. Results of 5 experiments showed that processing of orientation relations in 3-D relatable displays was superior to processing in 3-D nonrelatable displays and that these effects depended on object formation. 3-D interpolation and 3-D relatability are discussed in terms of their implications for computational and neural models of object perception, which have typically been based on 2-D-orientation-sensitive units.
Interbasis expansions in the Zernike system
NASA Astrophysics Data System (ADS)
Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander
2017-10-01
The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by 3F2(⋯|1) hypergeometric polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by 4F3(⋯|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.
Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.
2009-12-01
We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, of the form , with γ>0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.
Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.
Haglund, J; Haiman, M; Loehr, N
2005-02-22
Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schutzenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.
Multi-indexed (q-)Racah polynomials
NASA Astrophysics Data System (ADS)
Odake, Satoru; Sasaki, Ryu
2012-09-01
As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.
Dynamic analysis and control PID path of a model type gantry crane
NASA Astrophysics Data System (ADS)
Ospina-Henao, P. A.; López-Suspes, Framsol
2017-06-01
This paper presents an alternative approach to the dynamic modelling of a mechanical system that simulates a real-life gantry crane, using classical Euler mechanics and the Lagrange formalism, which allows us to find the equations of motion that describe the model. Moreover, a basic model of the system was designed using the SolidWorks software; based on the material and dimensions of the model, it provides some physical variables necessary for the modelling. In order to verify the theoretical results obtained, a comparison was made between the solutions obtained by simulation in SimMechanics-Matlab and the Euler-Lagrange equation system, which was solved through Matlab libraries for equation systems of the type and order obtained. The force is determined, but not as exerted by the spring, as this is the control variable. The objective is to move the mass of the pendulum from one point to another over a specified distance without oscillation, so that the response is overdamped. This article includes an analysis of PID control in which the Euler-Lagrange equations of motion are rewritten in state space; once there, they were implemented in Simulink to obtain the natural response of the system to a step input in F and then to draw the desired trajectories.
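The crane model itself is not reproduced here; the schematic below runs a PID loop on a generic second-order plant (unit mass with light damping), with toy gains hand-picked to give an overdamped step response of the kind the paper aims for:

    # Plant: m*x'' = F - c*x'   (schematic stand-in, not the paper's crane)
    dt, m, c = 0.001, 1.0, 0.5
    kp, ki, kd = 40.0, 10.0, 14.0          # hypothetical hand-tuned gains
    setpoint, x, v, integral = 1.0, 0.0, 0.0, 0.0
    prev_err = setpoint - x
    for _ in range(int(5.0 / dt)):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        F = kp * err + ki * integral + kd * deriv   # PID control force
        v += (F - c * v) / m * dt
        x += v * dt
        prev_err = err
    print(round(x, 3))                     # settles near the setpoint 1.0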
Conformal Galilei algebras, symmetric polynomials and singular vectors
NASA Astrophysics Data System (ADS)
Křižka, Libor; Somberg, Petr
2018-01-01
We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d,C) with d=1 for any integer value ℓ ∈ N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.
Identities associated with Milne-Thomson type polynomials and special numbers.
Simsek, Yilmaz; Cakic, Nenad
2018-01-01
The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p-adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.
NASA Astrophysics Data System (ADS)
Charles, Alexandre; Ballard, Patrick
2016-08-01
The dynamics of mechanical systems with a finite number of degrees of freedom (discrete mechanical systems) is governed by the Lagrange equation, which is a second-order differential equation on a Riemannian manifold (the configuration manifold). The handling of perfect (frictionless) unilateral constraints in this framework (that of Lagrange's analytical dynamics) was undertaken by Schatzman and Moreau at the beginning of the 1980s. A mathematically sound and consistent evolution problem was obtained, paving the road for many subsequent theoretical investigations. In this general evolution problem, the only reaction force involved is a generalized reaction force, consistently with the virtual power philosophy of Lagrange. Surprisingly, such a general formulation was never derived in the case of frictional unilateral multibody dynamics. Instead, the paradigm of the Coulomb law applying to reaction forces in the real world is generally invoked. So far, this paradigm has enabled a consistent evolution problem to be obtained in only a very few specific examples, and has suggested numerical algorithms to produce computational examples (numerical modeling). In particular, it is not clear what evolution problem underlies the computational examples. Moreover, some of the few specific cases in which this paradigm enables one to write down a precise evolution problem are known to show paradoxes: the Painlevé paradox (indeterminacy) and the Kane paradox (increase in kinetic energy due to friction). In this paper, we follow Lagrange's philosophy and formulate the frictional unilateral multibody dynamics in terms of the generalized reaction force and not in terms of the real-world reaction force. A general evolution problem that governs the dynamics is obtained for the first time. We prove that all the solutions are dissipative; that is, this new formulation is free of the Kane paradox. We also prove that some indeterminacy of the Painlevé paradox is fixed in this formulation.
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
The Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume an artificial neural network (ANN) model at all, but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) - λC(S,A(x,t)), which incorporates the measurement constraint C(S,A(x,t)) = λ([A]S - X) + (λ0 - 1)(Σ_i s_i - 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = -Σ_i s_i log s_i as the contrast function of an unknown number of independent sources s_i. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatial-temporal varying mixing matrix for real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a-posteriori maximum entropy methodologies defined by the ANN weight matrix [W] and sigmoid σ post-processing, H(Y = σ([W]X)), due to Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X's in BSAO or over the a priori source variables S in LCNN, which dictates which method works for a spatial-temporal varying [A(x,t)] that would not allow the neighborhood pixel average. We expected the success of sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment in the simulation of a varying mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saens' Swan and Rachmaninov's cello concerto).
Approximating smooth functions using algebraic-trigonometric polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharapudinov, Idris I
2011-01-14
The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^m (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3
Parameter reduction in nonlinear state-space identification of hysteresis
NASA Astrophysics Data System (ADS)
Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan
2018-05-01
Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model up to about 50%, while maintaining a comparable output error level.
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems.
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
NASA Astrophysics Data System (ADS)
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by an especially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNN which outperform considerably some previous constructive polynomial network algorithms on processing benchmark time series.
Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1992-01-01
Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.
NASA Astrophysics Data System (ADS)
Mironov, A.; Mkrtchyan, R.; Morozov, A.
2016-02-01
We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials at the SL and SO/Sp lines on Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n]=[n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of the universal polynomials and applications of these results are discussed.
Zernike Basis to Cartesian Transformations
NASA Astrophysics Data System (ADS)
Mathar, R. J.
2009-12-01
The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
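The radial polynomials tabulated in this work follow the standard explicit power-series formula, which can be evaluated directly; the small sketch below is that textbook formula, not the paper's tables:

    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial polynomial R_n^m(rho) as an explicit power series in rho."""
        m = abs(m)
        if (n - m) % 2:                    # R_n^m vanishes when n - m is odd
            return 0.0
        return sum((-1) ** k * factorial(n - k)
                   / (factorial(k) * factorial((n + m) // 2 - k)
                                   * factorial((n - m) // 2 - k))
                   * rho ** (n - 2 * k)
                   for k in range((n - m) // 2 + 1))

    print(zernike_radial(2, 0, 0.5))       # R_2^0(rho) = 2*rho^2 - 1 -> -0.5
    print(zernike_radial(4, 2, 1.0))       # every R_n^m equals 1 at rho = 1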
3-D Interpolation in Object Perception: Evidence from an Objective Performance Paradigm
ERIC Educational Resources Information Center
Kellman, Philip J.; Garrigan, Patrick; Shipley, Thomas F.; Yin, Carol; Machado, Liana
2005-01-01
Object perception requires interpolation processes that connect visible regions despite spatial gaps. Some research has suggested that interpolation may be a 3-D process, but objective performance data and evidence about the conditions leading to interpolation are needed. The authors developed an objective performance paradigm for testing 3-D…
Chaos, Fractals, and Polynomials.
ERIC Educational Resources Information Center
Tylee, J. Louis; Tylee, Thomas B.
1996-01-01
Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Borak, Jordan S.
2008-01-01
Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
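The simplest of the temporal approaches evaluated, linear interpolation across missing composites, can be sketched in a few lines (the 8-day compositing interval and the LAI values are hypothetical):

    import numpy as np

    def fill_gaps_temporal(t, lai):
        """Linearly interpolate missing (NaN) values along the time axis."""
        lai = lai.copy()
        bad = np.isnan(lai)
        lai[bad] = np.interp(t[bad], t[~bad], lai[~bad])
        return lai

    t = np.arange(0, 80, 8, dtype=float)   # hypothetical 8-day composite dates
    lai = np.array([0.5, 0.7, np.nan, 1.4, np.nan, np.nan, 2.3, 2.1, 1.8, 1.5])
    print(fill_gaps_temporal(t, lai))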
Universal Racah matrices and adjoint knot polynomials: Arborescent knots
NASA Astrophysics Data System (ADS)
Mironov, A.; Morozov, A.
2016-04-01
By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest to extend the universality from the dimensions to the Racah matrices and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel. Technically we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix, and for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory, however, universal polynomials in higher representations can probably be better in this respect.
Imaging characteristics of Zernike and annular polynomial aberrations.
Mahajan, Virendra N; Díaz, José Antonio
2013-04-01
The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
Real-time interpolation for true 3-dimensional ultrasound image volumes.
Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D
2011-02-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
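A compact trilinear interpolation routine of the kind compared in this study might look as follows (a generic sketch; the paper's implementation details are not reproduced):

    import numpy as np

    def trilinear(volume, p):
        """Trilinear interpolation of a 3-D array at fractional index p = (x, y, z)."""
        i0 = np.minimum(np.floor(p).astype(int), np.array(volume.shape) - 2)
        t = p - i0
        out = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = (t[0] if dx else 1 - t[0]) \
                      * (t[1] if dy else 1 - t[1]) \
                      * (t[2] if dz else 1 - t[2])
                    out += w * volume[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        return out

    vol = np.fromfunction(lambda x, y, z: x + y + z, (4, 4, 4))
    print(trilinear(vol, np.array([1.25, 2.5, 0.75])))  # linear field -> 4.5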
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) methods, directional sinogram interpolation (DSI) was implemented to generate additional CB projections by optimized (iterative) double-orientation estimation in sinogram space followed by directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped by 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and the image blur induced by interpolation was kept below that of the other interpolation methods.
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation in a statistically significant manner. The interpolation method presented is able to generate image sequences with the appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
NASA Astrophysics Data System (ADS)
Zhou, Rui-Rui; Li, Ben-Wen
2017-03-01
In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are carried out to improve the accuracy near the origin. In this way, fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas the finite difference method (FDM) achieves only second-order accuracy. Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently obtains highly accurate results, especially for optically thin media. The solutions, rounded to seven significant digits, are given in tabular form and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of the radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of RITE solutions.
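As a small illustration of the collocation machinery involved, the following sketch builds Chebyshev-Gauss-Lobatto points and the first-order differentiation matrix using Trefethen's standard construction; the paper derives its own explicit expressions for the Lagrange basis polynomials and differentiation matrices, so this is only a generic stand-in.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the first-order
    differentiation matrix (Trefethen's classic construction)."""
    assert n >= 1
    x = np.cos(np.pi * np.arange(n + 1) / n)            # collocation points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal = negative row sums
    return D, x

# Spectral accuracy check: differentiate sin at the collocation points
D, x = cheb(16)
print(f"max error: {np.max(np.abs(D @ np.sin(x) - np.cos(x))):.2e}")
```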
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. This approach does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for the relay nodes along a delivery path is proposed. Its time complexity is O(nΔ·max_{1≤i≤n}{u_i}), where u_i is the given upper bound of the retransmission threshold of sensor node i on the delivery path, n is the length of the delivery path, and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is not polynomially bounded, then, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are large enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms perform well. PMID:27171092
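A minimal dynamic-programming sketch of the threshold-allocation idea is given below. It adopts one simple reading of the problem: each attempt costs one time slot, hop i with threshold b succeeds with probability 1-(1-p_i)^b, and the per-hop thresholds (bounded by u_i) must together fit within the deadline Δ. The paper's exact objective and its distributed realization differ in detail.

```python
def optimal_thresholds(p, u, delta):
    """DP over (hop, remaining slots): maximize end-to-end delivery
    probability when hop i may use at most u[i] attempts and the
    thresholds together must fit within `delta` time slots.
    Runs in O(n * delta * max(u)), mirroring the bound in the abstract.
    """
    n = len(p)
    # best[i][d]: best probability for hops i..n-1 with d slots left
    best = [[0.0] * (delta + 1) for _ in range(n + 1)]
    best[n] = [1.0] * (delta + 1)          # past the last hop: delivered
    pick = [[0] * (delta + 1) for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for d in range(1, delta + 1):
            for b in range(1, min(d, u[i]) + 1):   # candidate threshold
                val = (1 - (1 - p[i]) ** b) * best[i + 1][d - b]
                if val > best[i][d]:
                    best[i][d], pick[i][d] = val, b
    # Walk forward to recover the chosen thresholds
    thresholds, d = [], delta
    for i in range(n):
        thresholds.append(pick[i][d])
        d -= pick[i][d]
    return best[0][delta], thresholds

prob, thr = optimal_thresholds([0.8, 0.6, 0.9], [4, 4, 4], delta=8)
print(f"delivery probability {prob:.3f} with thresholds {thr}")
```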
The Effect of Pulse Length and Ejector Radius on Unsteady Ejector Performance
NASA Technical Reports Server (NTRS)
Wilson, Jack
2005-01-01
The thrust augmentation of a set of ejectors driven by a shrouded Hartmann-Sprenger tube has been measured at four different frequencies. Each frequency corresponded to a different length-to-diameter ratio of the pulse of air leaving the driver shroud. Two of the frequencies had length-to-diameter ratios below the formation number, and two above. The formation number is the value of the length-to-diameter ratio below which the pulse converts to a vortex ring only, and above which the pulse becomes a vortex ring plus a trailing jet. A three-level, three-parameter Box-Behnken statistical design-of-experiments scheme was performed at each frequency, measuring the thrust augmentation generated by the appropriate ejectors from the set. The three parameters were ejector length, radius, and inlet radius. The results showed that there is an optimum ejector radius and length at each frequency. Using a polynomial fit to the data, the results were interpolated to different ejector radii and pulse length-to-diameter ratios. This showed that a peak in thrust augmentation occurs when the pulse length-to-diameter ratio equals the formation number, and that the optimum ejector radius is 0.87 times the sum of the vortex ring radius and the core radius.
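As a sketch of the response-surface step described above, the following fits a full quadratic polynomial to design-point data and locates its peak on a fine grid. The numbers are placeholders (not the paper's measurements), and only two of the three Box-Behnken factors are kept for brevity.

```python
import numpy as np

# Hypothetical (ejector radius, pulse L/D) design points and thrust
# augmentation readings -- placeholders, not the paper's data.
R = np.array([0.8, 1.0, 1.2, 0.8, 1.0, 1.2, 0.8, 1.0, 1.2])
LD = np.array([2.0, 2.0, 2.0, 3.5, 3.5, 3.5, 5.0, 5.0, 5.0])
T = np.array([1.2, 1.35, 1.25, 1.3, 1.5, 1.4, 1.15, 1.3, 1.2])

# Full quadratic response surface:
# T ~ a0 + a1*R + a2*LD + a3*R^2 + a4*LD^2 + a5*R*LD
A = np.column_stack([np.ones_like(R), R, LD, R**2, LD**2, R * LD])
coef, *_ = np.linalg.lstsq(A, T, rcond=None)

# Interpolate the fitted surface on a fine grid to locate the peak
r, ld = np.meshgrid(np.linspace(0.8, 1.2, 101), np.linspace(2.0, 5.0, 101))
surf = (coef[0] + coef[1] * r + coef[2] * ld
        + coef[3] * r**2 + coef[4] * ld**2 + coef[5] * r * ld)
i = np.unravel_index(np.argmax(surf), surf.shape)
print(f"peak augmentation {surf[i]:.3f} at R={r[i]:.2f}, L/D={ld[i]:.2f}")
```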
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
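The core numerical trick, switching from the direct divided-difference formula to a series approximation near the removable singularity, can be illustrated for the first divided difference of the exponential; the threshold and truncation order here are illustrative, whereas the paper derives an optimal series approximation and covers higher orders as well.

```python
import numpy as np

def dd_exp(a, b, tol=1e-3):
    """First divided difference of exp: (e^b - e^a) / (b - a).

    Near the removable singularity b ~ a the direct quotient loses
    digits to cancellation, so a truncated series about the midpoint
    is used there instead (threshold and order are illustrative).
    """
    h = b - a
    if abs(h) > tol:
        return (np.exp(b) - np.exp(a)) / h
    # Identity: dd = exp((a+b)/2) * sinh(h/2) / (h/2); expand sinh(x)/x
    x = h / 2.0
    sinhc = 1.0 + x * x / 6.0 + x ** 4 / 120.0
    return np.exp((a + b) / 2.0) * sinhc

print(dd_exp(1.0, 1.0 + 1e-9))   # stable, ~exp(1) in the limit b -> a
```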
DSM Based Orientation of Large Stereo Satellite Image Blocks
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Reinartz, P.
2012-07-01
High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficient (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both the DSM and the orthoimages. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth, and checks against this ground truth indicate a lateral error of 10 meters.
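The affine RPC correction mentioned above amounts to estimating a small 2D affine transform in image space. A least-squares sketch under that reading is shown below with synthetic points; the paper actually estimates the correction via DSM alignment and bundle block adjustment rather than from plain GCP residuals.

```python
import numpy as np

def fit_affine_correction(rpc_xy, gcp_xy):
    """Least-squares 2D affine correction in image space.

    rpc_xy: image coordinates predicted by the biased RPC model
    gcp_xy: matched image coordinates of the control points
    Returns a 2x3 matrix A with corrected = A @ [x, y, 1].
    """
    X = np.column_stack([rpc_xy, np.ones(len(rpc_xy))])   # design matrix
    A, *_ = np.linalg.lstsq(X, gcp_xy, rcond=None)        # solves X @ A ~ gcp_xy
    return A.T

rpc = np.array([[100.0, 200.0], [400.0, 250.0], [320.0, 480.0], [50.0, 420.0]])
gcp = rpc + np.array([3.5, -2.0])       # synthetic constant bias
print(fit_affine_correction(rpc, gcp))  # recovers identity plus the shift
```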
Moussaoui, Ahmed; Bouziane, Touria
2016-01-01
The local radial point interpolation method (LRPIM) is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. The method overcomes the singularity associated with the polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem for an elastic homogeneous rectangular plate using the LRPIM. Our numerical investigation concerns the influence of different shape parameters on the convergence domain and accuracy, using the thin plate spline (TPS) radial basis function. It also presents a comparison of numerical results for different materials and of the convergence domain, specifying maximum and minimum values as a function of the number of distributed nodes. The analytical solution of the deflection confirms the numerical results. The essential points of the method are:
• The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.
• The convergence of the LRPIM depends on a number of parameters derived from the local weak form and the sub-domains.
• The effect of the number of distributed nodes is studied while varying the material and the thin plate spline (TPS) radial basis function.
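A minimal sketch of the radial-basis ingredient, thin plate spline interpolation with the linear polynomial augmentation that keeps the system solvable, is given below; LRPIM goes further and assembles local weak forms over sub-domains, which is not shown.

```python
import numpy as np

def tps(r):
    """Thin plate spline radial basis phi(r) = r^2 log r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_interpolate(nodes, values, points):
    """Interpolate scattered nodal values with TPS radial basis functions
    plus a linear polynomial term (needed for a well-posed system)."""
    n = len(nodes)
    dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    P = np.column_stack([np.ones(n), nodes])            # [1, x, y] terms
    A = np.block([[tps(dist), P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
    a, b = sol[:n], sol[n:]
    dq = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=-1)
    return tps(dq) @ a + np.column_stack([np.ones(len(points)), points]) @ b

rng = np.random.default_rng(0)
nodes = rng.random((50, 2))
vals = np.sin(3 * nodes[:, 0]) * np.cos(3 * nodes[:, 1])
print(tps_interpolate(nodes, vals, rng.random((5, 2))))
```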
NASA Astrophysics Data System (ADS)
Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc
2018-05-01
The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method compares a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of a maximum are found by interpolating a second-order polynomial through the trio of points, provided that the magnitude of the middle point is greater than those of its two nearest neighbors in one direction. In theoretical models with multiple sources, however, this condition does not allow all locations of the horizontal-gradient maxima to be found, and it can be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum locations within the calculation grid. This additional condition improves the algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient can be helpful for connecting the edges of complicated source bodies.
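The interpolation step described above, fitting a second-order polynomial through three collinear samples and taking its vertex, can be sketched in a few lines; the names are illustrative.

```python
import numpy as np

def parabola_peak(x, g):
    """Vertex of the second-order polynomial through three points.

    x, g: abscissae and gradient magnitudes of a trio where the middle
    value exceeds both neighbors, so the fitted parabola has a maximum.
    Returns (location, magnitude) of that maximum.
    """
    c = np.polyfit(x, g, 2)            # coefficients [c2, c1, c0]
    xm = -c[1] / (2.0 * c[0])          # vertex location
    return xm, np.polyval(c, xm)

# Middle sample larger than both neighbors -> interior maximum
print(parabola_peak(np.array([-1.0, 0.0, 1.0]), np.array([2.0, 5.0, 4.0])))
```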