On Least Squares Fitting Nonlinear Submodels.
ERIC Educational Resources Information Center
Bechtel, Gordon G.
Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…
Nonlinear Least Squares Curve Fitting with Microsoft Excel Solver
NASA Astrophysics Data System (ADS)
Harris, Daniel C.
1998-01-01
"Solver" is a powerful tool in the Microsoft Excel spreadsheet that provides a simple means of fitting experimental data to nonlinear functions. The procedure is so easy to use and its mode of operation is so obvious that it is excellent for students to learn the underlying principle of least-squares curve fitting. This article introduces the method of fitting nonlinear functions with Solver and extends the treatment to weighted least squares and to the estimation of uncertainties in the least-squares parameters.
Nonlinear least-squares data fitting in Excel spreadsheets.
Kemmer, Gerdi; Keller, Sandro
2010-02-01
We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
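The protocol's central computation — tabulate residuals, square and sum them, and let an optimizer minimize the sum over the parameters — can be sketched outside Excel as well. A minimal Python analogue using the paper's Michaelis-Menten example (the rate constants and data below are synthetic, for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

# Michaelis-Menten model: v = Vmax * S / (Km + S)
def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Illustrative substrate concentrations and noisy rate data
S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
rng = np.random.default_rng(0)
v_obs = mm(S, 10.0, 2.5) + rng.normal(0, 0.05, S.size)

# Sum of squared residuals, minimized over (Vmax, Km) --
# the role that the Solver add-in plays in the spreadsheet
def ssr(params):
    Vmax, Km = params
    return np.sum((v_obs - mm(S, Vmax, Km)) ** 2)

result = minimize(ssr, x0=[5.0, 1.0], method="Nelder-Mead")
Vmax_fit, Km_fit = result.x
```

Here scipy's Nelder-Mead minimizer stands in for Solver; the visualization of confidence regions described in the paper would be layered on top of this fit.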
Variable Transformation in Nonlinear Least Squares model Fitting
1981-07-01
[Abstract garbled in scanning. Recoverable fragments cite H.J. Britt and R.H. Luecke, "The Estimation of Parameters in Nonlinear, Implicit Models," Technometrics, and mention the normal equations with respect to the unknowns C, 6, and K; the remainder is a report distribution list, including Union Carbide Corporation (ATTN: H.J. Britt, Charleston, WV) and the California Institute of Technology.]
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify either the tolerable least squares error in the fitting or the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a fit with up to a 100th-degree polynomial. All computations in the program are carried out under double precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
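The degree-escalation loop AKLSQF uses can be imitated with standard tools. A rough sketch (np.polyfit stands in for the orthogonal-polynomial machinery, and the tolerance test is on the RMS error; both are assumptions, not AKLSQF's exact criteria):

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=20):
    """Increase the polynomial degree until the RMS least-squares error meets tol."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if rms <= tol:
            return coeffs, degree, rms
    return coeffs, max_degree, rms

# Uniformly spaced data sampled from a cubic, as AKLSQF expects
x = np.linspace(0.0, 1.0, 50)
y = 1.0 - 2.0 * x + 3.0 * x ** 3
coeffs, degree, rms = fit_to_tolerance(x, y, tol=1e-8)
```

With exact cubic data the loop stops at degree 3, since degrees 1 and 2 cannot meet the tolerance.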
Deming's General Least Square Fitting
Rinard, Phillip
1992-02-18
DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler.
Nonlinear least squares and regularization
Berryman, J.G.
1996-04-01
A problem frequently encountered in the earth sciences requires deducing physical parameters of the system of interest from measurements of some other (hopefully) closely related physical quantity. The obvious example in seismology (either surface reflection seismology or crosswell seismic tomography) is the use of measurements of sound wave traveltime to deduce wavespeed distribution in the earth and then subsequently to infer the values of other physical quantities of interest such as porosity, water or oil saturation, permeability, etc. The author presents and discusses some general ideas about iterative nonlinear output least-squares methods. The main result is that, if it is possible to do forward modeling on a physical problem in a way that permits the output (i.e., the predicted values of some physical parameter that could be measured) and the first derivative of the same output with respect to the model parameters (whatever they may be) to be calculated numerically, then it is possible (at least in principle) to solve the inverse problem using the method described. The main trick learned in this analysis comes from the realization that the steps in the model updates may have to be quite small in some cases for the implied guarantees of convergence to be realized.
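The iterative scheme the author outlines — forward model, output Jacobian, and a deliberately small update — can be sketched as a damped Gauss-Newton loop. A toy illustration (the exponential forward model and the step size are invented for the example):

```python
import numpy as np

# Toy forward model: predicted data d(t) = m0 * exp(m1 * t)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

def forward(m):
    return m[0] * np.exp(m[1] * t)

def jacobian(m):
    e = np.exp(m[1] * t)
    return np.column_stack([e, m[0] * t * e])

def gauss_newton(f, jac, m0, d_obs, step=0.5, iters=200):
    """Iterative output least squares: m <- m + step * (J^T J)^{-1} J^T r.
    The deliberately small step reflects the paper's observation that
    updates may need to be quite small for convergence to be realized."""
    m = np.asarray(m0, dtype=float)
    for _ in range(iters):
        r = d_obs - f(m)                      # data misfit (residual)
        J = jac(m)
        m = m + step * np.linalg.solve(J.T @ J, J.T @ r)
    return m

d_obs = forward(np.array([2.0, -1.0]))        # noiseless synthetic data
m_fit = gauss_newton(forward, jacobian, [1.0, -0.5], d_obs)
```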
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
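The error expressions discussed here are built from the Jacobian at the solution: cov = s^2 (J^T J)^{-1} for the parameters, and sqrt(g^T cov g) for the fitted function, where g is the gradient of the model with respect to the parameters. A linear sketch, where the result can be cross-checked against the closed-form simple-regression formula (the data are illustrative):

```python
import numpy as np

# Straight-line fit y = a + b*x; the design matrix doubles as the Jacobian
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
J = np.column_stack([np.ones_like(x), x])

params, *_ = np.linalg.lstsq(J, y, rcond=None)
residuals = y - J @ params

# Fit variance s^2 = SSR / (n - p); parameter covariance s^2 (J^T J)^{-1}
n, p = J.shape
s2 = residuals @ residuals / (n - p)
cov = s2 * np.linalg.inv(J.T @ J)
param_std_errors = np.sqrt(np.diag(cov))

# Standard error of the fitted function at x0: sqrt(g^T cov g)
x0 = 2.0
g = np.array([1.0, x0])
fit_std_error = np.sqrt(g @ cov @ g)
```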
BLS: Box-fitting Least Squares
NASA Astrophysics Data System (ADS)
Kovács, G.; Zucker, S.; Mazeh, T.
2016-07-01
BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.
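A stripped-down version of the box-fitting idea — fold the series at each trial period and least-squares fit a two-level model, with little time spent at the lower level — can be sketched as follows (this illustrates the principle only; the published BLS algorithm differs in its normalization and search details):

```python
import numpy as np

def box_sse(t, y, period, n_bins=50, q=0.1):
    """Smallest sum of squared residuals of a two-level box model at one
    trial period, scanning the transit start phase over n_bins bins."""
    phase_bins = ((t % period) / period * n_bins).astype(int) % n_bins
    width = max(1, int(q * n_bins))              # in-transit bins
    best = np.inf
    for start in range(n_bins):
        in_transit = (phase_bins - start) % n_bins < width
        if in_transit.any() and (~in_transit).any():
            lo, hi = y[in_transit].mean(), y[~in_transit].mean()
            sse = np.sum((y - np.where(in_transit, lo, hi)) ** 2)
            best = min(best, sse)
    return best

# Synthetic light curve: flat at 1.0 with 1% dips of period 2.0
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 40.0, 2000))
y = 1.0 + rng.normal(0.0, 0.001, t.size)
y[(t % 2.0) / 2.0 < 0.08] -= 0.01                # transit: 8% of each period

periods = np.linspace(1.5, 2.5, 201)
sse = np.array([box_sse(t, y, p) for p in periods])
best_period = periods[np.argmin(sse)]
```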
Least-squares fitting Gompertz curve
NASA Astrophysics Data System (ADS)
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
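Once an initial approximation is chosen, the fitting step is a routine nonlinear LS problem. A sketch with scipy's curve_fit (the data and starting values are synthetic; the paper's existence conditions and initial-approximation rule are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth curve: y = a * exp(-b * exp(-c * t))
def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 10.0, 30)
rng = np.random.default_rng(2)
y = gompertz(t, 100.0, 5.0, 0.7) + rng.normal(0, 0.5, t.size)

# Crude initial approximation: asymptote from max(y), rough b and c guesses
p0 = [y.max(), 2.0, 0.5]
params, cov = curve_fit(gompertz, t, y, p0=p0)
a_fit, b_fit, c_fit = params
```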
Least squares polynomial fits and their accuracy
NASA Technical Reports Server (NTRS)
Lear, W. M.
1977-01-01
Equations are presented for fitting least-squares polynomials to tables of data. It is concluded that much data are needed to reduce the measurement-error standard deviation by a significant amount; however, great accuracy is attained at certain points.
NASA Astrophysics Data System (ADS)
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
Solution of Nonlinear Least-Squares Problems.
1987-07-01
Computation Building 460, Room 313, Stanford University, Stanford, California 94305-2140. [Remainder garbled in scanning. Recoverable table-of-contents fragments include numerical results, performance on a well-conditioned zero-residual problem, and termination conditions; a superscript convention marks zero-residual problems.]
Algorithms for Nonlinear Least-Squares Problems
1988-09-01
[Text garbled in scanning. Recoverable fragments discuss Newton-type methods on test problems where the Jacobian is well-conditioned at every iteration, a difficulty with the Gauss-Newton method, and a variant in which the term Y^T B p is ignored; cited works include Mathematical Programming 7 (1974) 351-367 and M.R. Osborne, "Some Aspects of Non-linear Least Squares Calculations."]
The Use of Non-Linear Least Squares Analysis.
ERIC Educational Resources Information Center
Copeland, Thomas G.
1984-01-01
Nonlinear least squares computer programs are extremely valuable in fitting complicated equations to experimental data. They are easy to use and free students and teachers from the tedium of trying to derive linearized forms to complicated equations. The use of these programs (available for most medium/large scale computers) is discussed. (JN)
Three dimensional least-squares fitting of ellipsoids and hyperboloids
NASA Astrophysics Data System (ADS)
Rahmadiantri, Elvira; Putri Lawiyuniarti, Made; Muchtadi-Alamsyah, Intan; Rachmaputri, Gantina
2017-09-01
Spatial continuity can be described as a variogram model that has an ellipsoid anisotropy. In previous research, the two-dimensional least-squares ellipse fitting method of Fitzgibbon, Pilu and Fisher was applied to the analysis of spatial continuity for coal deposits. However, it is not easy to generalize their method to three-dimensional least-squares ellipsoid fitting. In this research, we obtain a three-dimensional least-squares fit for ellipsoids and hyperboloids by generalizing the two-dimensional least-squares ellipse fitting method introduced by Gander, Golub and Strebel.
Analysis of Fluorescence Decay by the Nonlinear Least Squares Method
Periasamy, N.
1988-01-01
Fluorescence decay deconvolution analysis to fit a multiexponential function by the nonlinear least squares method requires numerical calculation of a convolution integral. A linear approximation of the successive data of the instrument response function is proposed for the computation of the convolution integral. Deconvolution analysis of simulated fluorescence data was carried out to show that the linear approximation method is generally better when one of the lifetimes is comparable to the time interval between data points. PMID:19431735
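The convolution integral at the heart of this analysis is, in discrete form, a sum over the instrument response function. A sketch using a simple rectangle-rule discretization (not the paper's linear-approximation scheme; the IRF and lifetimes are illustrative):

```python
import numpy as np

dt = 0.05
t = np.arange(0.0, 20.0, dt)

# Gaussian instrument response function (IRF) centered at t = 1.0, area-normalized
irf = np.exp(-((t - 1.0) ** 2) / (2 * 0.1 ** 2))
irf /= irf.sum()

# Biexponential decay law to be convolved with the IRF
decay = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 0.5)

# Rectangle-rule discrete convolution, truncated to the time grid;
# this is the model curve compared against data in each NLS iteration
model = np.convolve(irf, decay)[: t.size]
```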
A Global Least-Squares Fit for Absolute Zero
NASA Astrophysics Data System (ADS)
Salter, Carl
2003-09-01
A simple, nonlinear least-squares method is described that permits gas thermometry data to be fitted directly to absolute zero. This nonlinear method can be implemented using Solver in Excel, and unlike other linear methods previously reported, it is statistically sound. The Excel macro SolverAid can be used to compute the error in absolute zero. The method can be applied simultaneously to multiple sets of data, permitting a global value of absolute zero to be computed from different gas samples. Constant volume thermometry data for helium are used to illustrate the global fit to absolute zero using Solver in an Excel spreadsheet. The relationship between the global value of absolute zero and the values from the individual fits is analyzed.
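The global fit — one shared absolute-zero parameter with a separate slope for each gas sample — translates directly into a stacked-residual least-squares problem. A sketch with synthetic constant-volume data for two hypothetical samples (scipy stands in for Solver; the true intercept is set to -273.15 °C):

```python
import numpy as np
from scipy.optimize import least_squares

# Constant-volume model: P = k * (T - T0), with T0 shared across samples
T1 = np.array([0.0, 25.0, 50.0, 75.0, 100.0])    # deg C
T2 = np.array([10.0, 40.0, 70.0, 100.0])
rng = np.random.default_rng(3)
P1 = 0.30 * (T1 + 273.15) + rng.normal(0, 0.05, T1.size)
P2 = 0.45 * (T2 + 273.15) + rng.normal(0, 0.05, T2.size)

def residuals(params):
    T0, k1, k2 = params
    # Stacking both samples' residuals makes T0 a global parameter
    return np.concatenate([P1 - k1 * (T1 - T0), P2 - k2 * (T2 - T0)])

fit = least_squares(residuals, x0=[-250.0, 0.2, 0.2])
T0_fit = fit.x[0]
```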
Generalized least-squares fit of multiequation models
NASA Astrophysics Data System (ADS)
Marshall, Simon L.; Blencoe, James G.
2005-01-01
A method for fitting multiequation models to data sets of finite precision is proposed. This is based on the Gauss-Newton algorithm devised by Britt and Luecke (1973); the inclusion of several equations of condition to be satisfied at each data point results in a block diagonal form for the effective weighting matrix. This method allows generalized nonlinear least-squares fitting of functions that are more easily represented in the parametric form (x(t),y(t)) than as an explicit functional relationship of the form y=f(x). The Aitken (1935) formulas appropriate to multiequation weighted nonlinear least squares are recovered in the limiting case where the variances and covariances of the independent variables are zero. Practical considerations relevant to the performance of such calculations, such as the evaluation of the required partial derivatives and matrix products, are discussed in detail, and the operation of the algorithm is illustrated by applying it to the fit of complex permittivity data to the Debye equation.
Nonlinear Least-Squares Analysis of Stokes Profiles
NASA Astrophysics Data System (ADS)
Ceja, Jose A.; Walton, Stephen R.
2005-01-01
We describe improvements to the calibration and analysis of Fe I 6302.5 Å data from the San Fernando Observatory Video Spectra-Spectroheliograph (SFO-VSSHG). A more effective flat field is generated and the scales of the simultaneous Stokes measurements are made equal. Analysis of Stokes profiles is carried out using a moments technique but applied to nonlinear least-squares fits to the data. Typical spectropolarimetric results from the old and new methods are compared and discussed. Finally, we present asymmetry maps, a zero-crossing map, and a “relative-split” map extracted from the fits to the Stokes V profiles.
NASA Astrophysics Data System (ADS)
Benner, D. Chris; Devi, V. Malathy; Nugent, Emily; Brown, Linda R.; Miller, Charles E.; Toth, Robert A.; Sung, Keeyoon
2009-06-01
Room temperature spectra of carbon dioxide were obtained with the Fourier transform spectrometers at the National Solar Observatory's McMath-Pierce telescope and at the Jet Propulsion Laboratory. The multispectrum nonlinear least squares fitting technique is being used to derive accurate spectral line parameters for the strongest CO_2 bands in the 4700-4930 cm^{-1} spectral region. Positions of the spectral lines were constrained to their quantum mechanical relationships, and the rovibrational constants were derived directly from the fit. Similarly, the intensities of the lines within each of the rovibrational bands were constrained to their quantum mechanical relationships, and the band strength and Herman-Wallis coefficients were derived directly from the fit. These constraints even include a pair of interacting bands with the interaction coefficient derived directly using both the positions and intensities of the spectral lines. Room temperature self and air Lorentz halfwidth and pressure induced line shift coefficients are measured for most lines. Constraints upon the positions improve measurement of pressure-induced shifts, and constraints on the intensities improve the measurement of the Lorentz halfwidths. Line mixing and speed dependent line shapes are also required and characterized. D. Chris Benner, C.P. Rinsland, V. Malathy Devi, M.A.H. Smith, and D. Atkins, J. Quant. Spectrosc. Radiat. Transfer 53, 705-721 (1995)
NASA Astrophysics Data System (ADS)
Liu, Hongmei; Wei, Gaoling; Xu, Zhen; Liu, Peng; Li, Ying
2016-12-01
Quantitative analysis of Co and Fe using X-ray photoelectron spectroscopy (XPS) is important for evaluating the catalytic ability of Co-substituted magnetite. However, the overlap of XPS peaks and Auger peaks for Co and Fe complicates quantification. In this study, non-linear least squares fitting (NLLSF) was used to calculate the relative Co and Fe contents of a series of synthesized Co-substituted magnetite samples with different Co doping levels. NLLSF separated the XPS peaks of Co 2p and Fe 2p from the Auger peaks of Fe and Co, respectively. Compared with a control group without fitting, the accuracy of quantification of Co and Fe was greatly improved once NLLSF removed the interference of the Auger peaks. A catalysis study confirmed that the catalytic activity of magnetite was enhanced with increasing Co substitution. This study confirms the effectiveness and accuracy of the NLLSF method in XPS quantitative calculation of coexisting Fe and Co.
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
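The abstract is truncated, but the motivating problem — sum-of-squares objectives with multiple local minima — is easy to reproduce. As an illustration of population-based global search (scipy's differential evolution here, a relative of, but not the same as, the authors' genetic algorithm):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Frequency fitting: the SSR surface in b has many local minima
x = np.linspace(0.0, 10.0, 200)
y = np.sin(2.5 * x)                     # true frequency 2.5

def ssr(params):
    (b,) = params
    return np.sum((y - np.sin(b * x)) ** 2)

# A gradient-based fit started far from 2.5 gets trapped in a local
# minimum; a population-based global search over the bounds does not.
result = differential_evolution(ssr, bounds=[(0.1, 5.0)], seed=4)
b_fit = result.x[0]
```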
Multisplitting for linear, least squares and nonlinear problems
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Software Performance on Nonlinear Least-Squares Problems
1989-01-01
[Text garbled in scanning. Recoverable fragments cite Gill, Murray, and Wright [1981], Dennis and Schnabel [1983], and Moré and Sorensen [1984]; Section 3 reviews the principal approaches used in software for nonlinear least squares, including a factorization with upper-triangular, nonsingular R11 (see, e.g., Stewart [1973], Chapter 3), Gill and Murray's altered Cholesky factorization, and the use of J0^T J0 as the initial estimate provided the columns of J0 are linearly independent.]
Faraday rotation data analysis with least-squares elliptical fitting
White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D.
2010-10-15
A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
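The direct conic fit underlying the method is a linear least-squares problem on the general conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0. A generic SVD-based sketch with a unit-norm constraint on the coefficients (the paper's subsequent rotation, translation, and rescaling steps are not reproduced):

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic coefficients (A, B, C, D, E, F) with |coef| = 1:
    the right singular vector of the design matrix with the smallest
    singular value minimizes |D @ coef| on the unit sphere."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Points on an illustrative ellipse (semi-axes 3 and 1, rotated 30 degrees)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
x = c * 3 * np.cos(theta) - s * 1 * np.sin(theta)
y = s * 3 * np.cos(theta) + c * 1 * np.sin(theta)

coef = fit_conic(x, y)
# B^2 - 4AC is negative for an ellipse
discriminant = coef[1] ** 2 - 4 * coef[0] * coef[2]
```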
NASA Technical Reports Server (NTRS)
Hays, J. R.
1969-01-01
Lumped parametric system models are simplified and computationally advantageous in the frequency domain of linear systems. Nonlinear least squares computer program finds the least square best estimate for any number of parameters in an arbitrarily complicated model.
STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.
Thornhill, D P; Schwerzel, E
1985-05-01
A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence.
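The core calculation — weighted nonlinear regression of a sum of exponentials — ports directly to modern tools. A biexponential sketch with synthetic drug-level data (reciprocal weighting is realized through curve_fit's sigma argument; the stripping procedure for initial estimates is replaced by hand-chosen guesses):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Synthetic drug-level profile with 2% proportional noise
t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 18, 24], dtype=float)
rng = np.random.default_rng(5)
y = biexp(t, 8.0, 1.2, 2.0, 0.1) * (1 + rng.normal(0, 0.02, t.size))

# Weights 1/y ("weighting data according to their reciprocals"):
# curve_fit minimizes sum(((y - f)/sigma)^2), so sigma = sqrt(y)
params, _ = curve_fit(biexp, t, y, p0=[5.0, 1.0, 1.0, 0.05], sigma=np.sqrt(y))
A_fit, alpha_fit, B_fit, beta_fit = params
```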
GaussFit - A system for least squares and robust estimation
NASA Technical Reports Server (NTRS)
Jefferys, W. H.; Fitzpatrick, M. J.; Mcarthur, B. E.
1988-01-01
GaussFit is a new computer program for solving least-squares and robust estimation problems. It has a number of unique features, including a complete programming language designed especially to formulate estimation problems, a built-in compiler and interpreter to support the programming language, and a built-in algebraic manipulator for calculating the required partial derivatives analytically. These features make GaussFit very easy to use, so that even complex problems can be set up and solved with minimal effort. GaussFit can correctly handle many cases of practical interest: nonlinear models, exact constraints, correlated observations, and models where the equations of condition contain more than one observed quantity. An experimental robust estimation capability is built into GaussFit so that data sets contaminated by outliers can be handled simply and efficiently.
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
Using R^2 to compare least-squares fit models: When it must fail
USDA-ARS?s Scientific Manuscript database
R^2 can be used correctly to select from among competing least-squares fit models when the data are fitted in common form and with common weighting. However, then R^2 comparisons become equivalent to comparisons of the estimated fit variance s^2 in unweighted fitting, or of the reduced chi-square in...
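The quantities being compared are quick to compute side by side. A sketch for an unweighted straight-line fit (illustrative data), showing R^2 alongside the estimated fit variance s^2 = SSR/(n - p):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.2, 1.1, 1.9, 3.2, 3.8, 5.1])

# Unweighted straight-line fit
coeffs = np.polyfit(x, y, 1)
resid = y - np.polyval(coeffs, x)
n, p = x.size, 2

ss_res = np.sum(resid ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Estimated fit variance: for models fitted in common form with common
# weighting, ranking by R^2 mirrors ranking by s^2 (smaller s^2, larger R^2)
s2 = ss_res / (n - p)
```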
Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou
2014-01-01
In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. This proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220
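The moving least squares step — a weighted local polynomial fit centered on each sample — can be sketched as follows (Gaussian weights and a local line; the bandwidth and test signal are illustrative, not taken from the paper):

```python
import numpy as np

def moving_least_squares(x, y, h=0.1, degree=1):
    """Smooth y(x) by fitting a local weighted polynomial at every sample."""
    y_smooth = np.empty_like(y)
    for i, xi in enumerate(x):
        w = np.exp(-((x - xi) ** 2) / (2 * h ** 2))   # Gaussian weights
        # np.polyfit weights multiply residuals linearly, so pass sqrt(w)
        coeffs = np.polyfit(x - xi, y, degree, w=np.sqrt(w))
        y_smooth[i] = coeffs[-1]       # fitted value at the center point
    return y_smooth

# Noisy impulsive-looking signal
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 400)
clean = np.exp(-((x - 0.5) ** 2) / (2 * 0.02 ** 2))
y = clean + rng.normal(0, 0.1, x.size)

y_smooth = moving_least_squares(x, y, h=0.01)
```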
Radiographic least-squares fitting technique accurately measures dimensions and x-ray attenuation
Kelley, T.A.; Stupin, D.M.
1997-10-01
In support of stockpile stewardship and other important nondestructive test (NDT) applications, the authors seek improved methods for rapid evaluation of materials to detect degradation, warping, and shrinkage. Typically, such tests involve manual measurements of dimensions on radiographs. The authors seek to speed the process and reduce the costs of performing NDT by analyzing radiographic data using a least-squares fitting technique for rapid evaluation of industrial parts. In 1985, Whitman, Hanson, and Mueller demonstrated a least-squares fitting technique that very accurately locates the edges of cylindrically symmetrical objects in radiographs. To test the feasibility of applying this technique to a large number of parts, the authors examine whether an automated least squares algorithm can be routinely used for measuring the dimensions and attenuations of materials in two nested cylinders. The proposed technique involves making digital radiographs of the cylinders and analyzing the images. In the authors' preliminary study, however, they use computer simulations of radiographs.
Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples
NASA Astrophysics Data System (ADS)
Harada, Koji; Sakai, Hideaki
In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed nonaliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method and provides approximately the same performance as the NLS method with a reduced sampling rate, even for closely spaced time delays. This enables, for a fixed observation time, a significant reduction in the required number of samples while maintaining the same level of estimation performance.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used.
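The recursive least-squares core that such schemes extend (here without the Wiener model's inner iterative signal estimate) can be sketched as a standard RLS update with a forgetting factor:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    err = y - (phi.T @ theta).item()               # prediction error
    theta = theta + k * err
    P = (P - k @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Identify y[t] = a*u[t] + b*u[t-1] online (true a = 2.0, b = -0.5)
rng = np.random.default_rng(6)
u = rng.normal(size=500)
y = 2.0 * u[1:] - 0.5 * u[:-1] + rng.normal(0, 0.01, 499)

theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)                                # large initial covariance
for t in range(499):
    phi = np.array([u[t + 1], u[t]])
    theta, P = rls_update(theta, P, phi, y[t])
a_fit, b_fit = theta.ravel()
```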
Representing Topography with Second-Degree Bivariate Polynomial Functions Fitted by Least Squares.
ERIC Educational Resources Information Center
Neuman, Arthur Edward
1987-01-01
There is a need for abstracting topography other than for mapping purposes. The method employed should be simple and available to non-specialists, thereby ruling out spline representations. Generalizing from univariate first-degree least squares and from multiple regression, this article introduces bivariate polynomial functions fitted by least…
Geometry of nonlinear least squares with applications to sloppy models and optimization.
Transtrum, Mark K; Machta, Benjamin B; Sethna, James P
2011-03-01
Parameter estimation by nonlinear least-squares minimization is a common problem that has an elegant geometric interpretation: the possible parameter values of a model induce a manifold within the space of data predictions. The minimization problem is then to find the point on the manifold closest to the experimental data. We show that the model manifolds of a large class of models, known as sloppy models, have many universal features; they are characterized by a geometric series of widths, extrinsic curvatures, and parameter-effect curvatures, which we describe as a hyper-ribbon. A number of common difficulties in optimizing least-squares problems are due to this common geometric structure. First, algorithms tend to run into the boundaries of the model manifold, causing parameters to diverge or become unphysical before they have been optimized. We introduce the model graph as an extension of the model manifold to remedy this problem. We argue that appropriate priors can remove the boundaries and further improve the convergence rates. We show that typical fits will have many evaporated parameters unless the data are very accurately known. Second, "bare" model parameters are usually ill-suited to describing model behavior; cost contours in parameter space tend to form hierarchies of plateaus and long narrow canyons. Geometrically, we understand this inconvenient parametrization as an extremely skewed coordinate basis and show that it induces a large parameter-effect curvature on the manifold. By constructing alternative coordinates based on geodesic motion, we show that these long narrow canyons are transformed in many cases into a single quadratic, isotropic basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting algorithms as an Euler approximation to geodesic motion in these natural coordinates on the model manifold and the model graph, respectively. By adding a geodesic acceleration adjustment to these algorithms, we alleviate the
Rathbun, R.E.; Tai, D.Y.
1984-01-01
A nonlinear least squares procedure and a log transformation procedure for calculating first-order rate coefficients from experimental concentration-versus-time data were compared using laboratory measurements of the volatilization from water of 1,1,1-trichloroethane and 1,2-dichloroethane and the absorption of oxygen by water. Ratios of the nonlinear least squares to log transformation volatilization and absorption coefficients for 77 tests ranged from 0.955 to 1.08 and averaged 1.01. Comparison of the maximum, minimum, and mean root-mean-square errors of prediction for six sets of coefficients showed that the errors for the nonlinear least squares procedure were almost always smaller than the errors for the log transformation procedure.
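The two procedures compared above can be reproduced in a few lines on synthetic data (a sketch, not the authors' measurements; the rate constant, noise level, and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 20)
k_true, c0_true = 0.3, 5.0
# first-order decay with small multiplicative measurement noise
c = c0_true * np.exp(-k_true * t) * (1.0 + 0.02 * rng.standard_normal(t.size))

# procedure 1: nonlinear least squares directly on C(t) = C0 * exp(-k t)
popt, _ = curve_fit(lambda t, k, c0: c0 * np.exp(-k * t), t, c, p0=(0.1, 1.0))
k_nls = popt[0]

# procedure 2: log transformation, ln C = ln C0 - k t, fitted by linear regression
slope, intercept = np.polyfit(t, np.log(c), 1)
k_log = -slope

ratio = k_nls / k_log  # the study's ratios ranged from 0.955 to 1.08
```

The log transform reweights the errors (small concentrations get amplified influence), which is why the two estimates differ slightly even on the same data.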
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
NASA Astrophysics Data System (ADS)
Golub, Gene; Pereyra, Victor
2003-04-01
In this paper we review 30 years of developments and applications of the variable projection method for solving separable nonlinear least-squares problems. These are problems for which the model function is a linear combination of nonlinear functions. Taking advantage of this special structure, the method of variable projections eliminates the linear variables obtaining a somewhat more complicated function that involves only the nonlinear parameters. This procedure not only reduces the dimension of the parameter space but also results in a better-conditioned problem. The same optimization method applied to the original and reduced problems will always converge faster for the latter. We present first a historical account of the basic theoretical work and its various computer implementations, and then report on a variety of applications from electrical engineering, medical and biological imaging, chemistry, robotics, vision, and environmental sciences. An extensive bibliography is included. The method is particularly well suited for solving real and complex exponential model fitting problems, which are pervasive in their applications and are notoriously hard to solve.
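The structure that variable projection exploits can be illustrated on a two-exponential model: for any trial values of the nonlinear rates, the linear coefficients have a closed-form least-squares solution, so the outer optimizer only sees the nonlinear parameters. This is a minimal sketch, not the VARPRO code itself; the model, data, and starting values are invented:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 60)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)  # noise-free target

def projected_residual(alpha):
    # columns are the nonlinear basis functions exp(-alpha_j * t)
    Phi = np.exp(-np.outer(t, alpha))
    # eliminate the linear coefficients by an inner linear least-squares solve
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

# the outer problem now involves only the two nonlinear rates
sol = least_squares(projected_residual, x0=[0.5, 2.0])
alpha = np.sort(sol.x)
```

The reduced problem has half the dimension of the original four-parameter fit, which is the conditioning and convergence advantage the review describes.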
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
Yan, Y
2004-09-15
A singular value decomposition (SVD)-enhanced least-squares fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that respond to variations of the variables, the convergence of the least-squares fitting is significantly enhanced. The fitting speed is therefore fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through least-squares fitting of the phase advances, the local Green's functions, and the coupling ellipses among BPMs. The local Green's functions are specified by four local transfer matrix components: R12, R34, R32, and R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn beam position monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator that matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
A new algorithm for constrained nonlinear least-squares problems, part 1
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, F. T.
1983-01-01
A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
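The problem class this algorithm targets — nonlinear least squares with simple bounds on the unknowns and a box-shaped trust region — can be set up today with SciPy's trust-region reflective solver. This is a sketch of the problem class, not the authors' 1983 algorithm; the saturation model and bounds are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 30)
y = 1.5 * t / (0.2 + t)  # noise-free saturation curve, vmax=1.5, km=0.2

def resid(p):
    vmax, km = p
    return vmax * t / (km + t) - y

# trust-region method ('trf') with simple bounds on both unknowns
sol = least_squares(resid, x0=[1.0, 1.0],
                    bounds=([0.0, 0.01], [10.0, 10.0]), method='trf')
```

As the abstract notes for the original algorithm, Gauss-Newton-type methods of this kind are most effective when the residuals at the solution are small.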
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
Least squares reconstruction of non-linear RF phase encoded MR data.
Salajeghe, Somaie; Babyn, Paul; Sharp, Jonathan C; Sarty, Gordon E
2016-09-01
The numerical feasibility of reconstructing MRI signals generated by RF coils that produce B1 fields with a non-linearly varying spatial phase is explored. A global linear spatial phase variation of B1 is difficult to produce from current confined to RF coils. Here we use regularized least squares inversion, in place of the usual Fourier transform, to reconstruct signals generated in B1 fields with non-linear phase variation. RF encoded signals were simulated for three RF coil configurations: ideal linear, parallel conductor, and circular coil pairs. The simulated signals were reconstructed by Fourier transform and by regularized least squares. The Fourier reconstruction of simulated RF encoded signals from the parallel conductor coil set showed minor distortions relative to the reconstruction of signals from the ideal linear coil set, but the Fourier reconstruction of signals from the circular coil set produced severe geometric distortion. Least squares inversion in all cases produced reconstruction errors comparable to the Fourier reconstruction of the simulated signal from the ideal linear coil set. MRI signals encoded in B1 fields with non-linearly varying spatial phase may therefore be accurately reconstructed using regularized least squares, pointing the way to the use of simple RF coil designs for RF encoded MRI.
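In its simplest Tikhonov form, the regularized least-squares inversion used here reduces to a single linear solve of the normal equations. A generic sketch follows; the "encoding" matrix below is random, not a model of the paper's RF coils, and the regularization weight is illustrative:

```python
import numpy as np

def regularized_lstsq(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy demo: recover x from y = A x with a small ridge term
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 8))   # stand-in for an encoding matrix
x_true = rng.standard_normal(8)
y = A @ x_true
x_hat = regularized_lstsq(A, y, lam=1e-6)
```

The regularization term is what keeps the inversion stable when the encoding matrix is poorly conditioned, as it is for the distorted non-linear phase encodings studied above.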
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
solving systems of partial differential equations are well established. A least-squares fit was first famously used by Gauss in 1801 to predict the... the problem for a desired angular deflection of the tip for a given freestream velocity. [Equation garbled in the source excerpt.]
Fitting of dihedral terms in classical force fields as an analytic linear least-squares problem.
Hopkins, Chad W; Roitberg, Adrian E
2014-07-28
The derivation and optimization of most energy terms in modern force fields are aided by automated computational tools. It is therefore important to have algorithms to rapidly and precisely train large numbers of interconnected parameters to allow investigators to make better decisions about the content of molecular models. In particular, the traditional approach to deriving dihedral parameters has been a least-squares fit to target conformational energies through variational optimization strategies. We present a computational approach for simultaneously fitting force field dihedral amplitudes and phase constants which is analytic within the scope of the data set. This approach completes the optimal molecular mechanics representation of a quantum mechanical potential energy surface in a single linear least-squares fit by recasting the dihedral potential into a linear function in the parameters. We compare the resulting method to a genetic algorithm in terms of computational time and quality of fit for two simple molecules. As suggested in previous studies, arbitrary dihedral phases are only necessary when modeling chiral molecules, which include more than half of drugs currently in use, so we also examined a dihedral parametrization case for the drug amoxicillin and one of its stereoisomers where the target dihedral includes a chiral center. Asymmetric dihedral phases are needed in these types of cases to properly represent the quantum mechanical energy surface and to differentiate between stereoisomers about the chiral center.
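The recasting described above — writing each term A_n cos(nφ + δ_n) as a_n cos(nφ) + b_n sin(nφ), with a_n = A_n cos δ_n and b_n = -A_n sin δ_n, so the fit becomes linear in the parameters — can be sketched as follows. The target energies are synthetic, not the paper's quantum mechanical data:

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
# synthetic target dihedral energies: amplitude 1.2, periodicity 3, phase 0.4
E = 1.2 * np.cos(3 * phi + 0.4)

periods = [1, 2, 3]
# design matrix with cos(n*phi) and sin(n*phi) columns: linear in the parameters
X = np.column_stack([np.cos(n * phi) for n in periods] +
                    [np.sin(n * phi) for n in periods])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
a, b = coef[:3], coef[3:]

amp = np.hypot(a, b)        # recovered amplitudes A_n
phase = np.arctan2(-b, a)   # recovered phases delta_n
```

A single linear solve recovers both amplitudes and phases at once, which is the advantage over variational strategies that iterate on the parameters directly.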
Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.
Mohammed, Goran Abdulrahman; Hou, Ming
2016-03-01
The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical test scores root-mean-squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6% for simulated data and 0.2-5% for experimental data). The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on the active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies.
Potential problems with the use of least squares spline fits to filter CO sub 2 data
Enting, I.G.
1986-05-20
In a number of studies, various workers have used the procedure of fitting cubic splines with nodes spaced 12 months apart in order to remove seasonal effects from atmospheric CO{sub 2} observations. A number of trial examples of least squares spline fits are presented, indicating that this procedure can be subject to relatively large end effects. For variations with periods near 2 years, the response is extremely sensitive to the position of the nodes and shows beating effects caused by the change in the relative positions of the signal peaks and the spline nodes. A signal whose period is exactly twice the node spacing can either be passed almost unchanged or be completely filtered out, depending on its phase relative to the nodes of the spline. These properties make such spline fitting unsuitable for analyzing the interannual variations in CO{sub 2} data.
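A least-squares cubic-spline fit with nodes spaced 12 samples apart, of the kind examined above, can be set up with SciPy's `LSQUnivariateSpline`. This is an illustrative sketch on synthetic "monthly" data with a slow trend, not the CO2 records discussed in the paper:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.arange(120.0)                   # ten "years" of monthly samples
trend = 0.01 * t                       # slow signal the spline should retain
data = trend + 0.05 * rng.standard_normal(t.size)

knots = np.arange(12.0, 120.0, 12.0)   # interior nodes 12 months apart
spline = LSQUnivariateSpline(t, data, knots, k=3)
smoothed = spline(t)
```

The end intervals are the place to inspect: there the fit is constrained by data on one side only, which is where the large end effects reported above originate.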
NASA Astrophysics Data System (ADS)
Tang, Chen; Wang, Wenping; Yan, Haiqing; Gu, Xiaohui
2007-05-01
An efficient method is proposed to reduce the noise from electronic speckle pattern interferometry (ESPI) phase fringe patterns obtained by any technique. We establish the filtering windows along the tangent direction of the phase fringe patterns. The x and y coordinates of each point in the established filtering windows are defined as the sine and cosine of the half-wrapped phase multiplied by a random quantity, and the phase value is then calculated from these points' coordinates based on a least-squares fitting algorithm. We tested the proposed method on computer-simulated speckle phase fringe patterns and on an experimentally obtained phase fringe pattern, and compared it with the improved sine/cosine average filtering method [Opt. Commun. 162, 205 (1999)] and the least-squares phase-fitting method [Opt. Lett. 20, 931 (1995)], which may be the most efficient existing methods. In all cases, our results are better than those obtained with these two methods, and our method overcomes their main disadvantages.
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
NASA Technical Reports Server (NTRS)
Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.
1987-01-01
The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about + or - 20 deg.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Melgaard, David K; Haaland, David M
2004-09-01
Comparisons of prediction models from the new augmented classical least squares (ACLS) and partial least squares (PLS) multivariate spectral analysis methods were conducted using simulated data containing deviations from the idealized model. The simulated data were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions. Simulated uncorrelated concentration errors, uncorrelated and correlated spectral noise, and nonlinear spectral responses were included to evaluate the methods on situations representative of experimental data. The statistical significance of differences in prediction ability was evaluated using the Wilcoxon signed rank test. The prediction differences were found to be dependent on the type of noise added, the numbers of calibration samples, and the component being predicted. For analyses applied to simulated spectra with noise-free nonlinear response, PLS was shown to be statistically superior to ACLS for most of the cases. With added uncorrelated spectral noise, both methods performed comparably. Using 50 calibration samples with simulated correlated spectral noise, PLS showed an advantage in 3 out of 9 cases, but the advantage dropped to 1 out of 9 cases with 25 calibration samples. For cases with different noise distributions between calibration and validation, ACLS predictions were statistically better than PLS for two of the four components. Also, when experimentally derived correlated spectral error was added, ACLS gave better predictions that were statistically significant in 15 out of 24 cases simulated. On data sets with nonuniform noise, neither method was statistically better, although ACLS usually had smaller standard errors of prediction (SEPs). The varying results emphasize the need to use realistic simulations when making comparisons between various multivariate calibration methods. Even when the differences between the standard error of predictions were statistically
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also brought along, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer points than the matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated on samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double exposure method.
A nonlinear quality-related fault detection approach based on modified kernel partial least squares.
Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen
2017-01-01
In this paper, a new nonlinear quality-related fault detection method is proposed based on kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into feature space in which the linear relationship between kernel matrix and output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD) and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation for the proposed method.
Metafitting: Weight optimization for least-squares fitting of PTTI data
NASA Technical Reports Server (NTRS)
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods
NASA Technical Reports Server (NTRS)
Crowley, J. K.; Clark, R. N.
1992-01-01
Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are fit, over selected wavelength intervals, to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times the band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts, and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the three existing methods. According to the SNR results obtained from the experiment, the NLLSR method yields an SNR error difference of approximately less than 1% compared with the three existing methods.
SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation
NASA Astrophysics Data System (ADS)
Xie, Xiaomin; Wang, Tingting
2017-02-01
In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyper-planes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance the performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term rather than an edge stop function is adopted in the energy function to improve the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use.
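A nonlinear least-squares setup in the spirit of the approach above can be sketched with a Marshall-Hoare-type double exponential. This is an illustrative sketch, not the paper's reformulation or its dimension-reduction technique: the ambient and at-death temperatures, reading schedule, and starting guesses are all invented, and the ambient temperature is assumed known here:

```python
import numpy as np
from scipy.optimize import least_squares

T_amb, T0 = 20.0, 37.0  # assumed ambient and at-death temperatures (deg C)

def body_temp(tau, k, p):
    # Marshall-Hoare-type double exponential; tau = hours since death
    return T_amb + (T0 - T_amb) * (p * np.exp(-k * tau)
                                   - k * np.exp(-p * tau)) / (p - k)

# synthetic readings every 15 min, starting 3 h after death
t_clock = np.arange(0.0, 4.25, 0.25)   # clock time since the first reading
readings = body_temp(t_clock + 3.0, 0.1, 0.5)

def residual(params):
    tod, k, p = params  # tod = hours between death and the first reading
    return body_temp(t_clock + tod, k, p) - readings

# bounds keep the two cooling rates separated (k < p) and tod non-negative
sol = least_squares(residual, x0=[2.5, 0.08, 0.4],
                    bounds=([0.0, 0.01, 0.31], [10.0, 0.30, 1.0]))
tod_est = sol.x[0]
```

As the abstract notes, the temperature series alone drives the fit; neither the body mass nor the true temperature at death needs to be measured, since the latter falls out once the rates and the time offset are estimated.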
Blumberg, L.N.
1992-03-01
The authors have analyzed simulated magnetic measurements data for the SXLS bending magnet in a plane perpendicular to the reference axis at the magnet midpoint by fitting the data to an expansion solution of the 3-dimensional Laplace equation in curvilinear coordinates as proposed by Brown and Servranckx. The method of least squares is used to evaluate the expansion coefficients and their uncertainties, and compared to results from an FFT fit of 128 simulated data points on a 12-mm radius circle about the reference axis. They find that the FFT method gives smaller coefficient uncertainties than the Least Squares method when the data are within similar areas. The Least Squares method compares more favorably when a larger number of data points are used within a rectangular area of 30-mm vertical by 60-mm horizontal--perhaps the largest area within the 35-mm x 75-mm vacuum chamber for which data could be obtained. For a grid with 0.5-mm spacing within the 30 x 60 mm area the Least Squares fit gives much smaller uncertainties than the FFT. They are therefore in the favorable position of having two methods which can determine the multipole coefficients to much better accuracy than the tolerances specified to General Dynamics. The FFT method may be preferable since it requires only one Hall probe rather than the four envisioned for the least squares grid data. However least squares can attain better accuracy with fewer probe movements. The time factor in acquiring the data will likely be the determining factor in choice of method. They should further explore least squares analysis of a Fourier expansion of data on a circle or arc of a circle since that method gives coefficient uncertainties without need for multiple independent sets of data as needed by the FFT method.
Multiparameter linear least-squares fitting to Poisson data one count at a time
NASA Technical Reports Server (NTRS)
Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.
1995-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n(sub i) of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w(sub i) in the weighted LLSQ method when square root of n(sub i) instead of square root of bar-n(sub i) is used to approximate the uncertainties, sigma(sub i), in the data, where bar-n(sub i) = E(n(sub i)), the expected value of n(sub i). We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 ke
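The abstract's key point, that weighting by the model-predicted (expected) counts rather than the raw observed counts makes the weighted LLSQ normal equations coincide with the Poisson maximum-likelihood equations, can be illustrated with a short iteratively reweighted solver. This is a sketch of the general idea, not the HEAO 3 analysis code; the two-component model below is hypothetical.

```python
import numpy as np

def poisson_llsq(A, n, n_iter=20):
    # Weighted linear least squares for Poisson counts n, with the weights
    # taken from the model-predicted (expected) counts rather than the raw
    # observed counts, iterated to self-consistency.  At convergence the
    # normal equations coincide with the Poisson maximum-likelihood equations.
    x = np.linalg.lstsq(A, n, rcond=None)[0]        # unweighted starting point
    for _ in range(n_iter):
        mu = A @ x                                   # current expected counts
        w = 1.0 / np.clip(mu, 1e-12, None)           # weights 1 / E[n_i]
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * n))
    return x

# Hypothetical two-component model: constant background plus a source profile.
A = np.column_stack([np.ones(6), np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0])])
x_true = np.array([2.0, 3.0])                        # background rate, source flux
n = A @ x_true                                       # noiseless "observed" counts
x_hat = poisson_llsq(A, n)
```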
A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression
Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin
2012-01-01
To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on nonlinear least-square support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.
Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares
Heidari, M.; Moench, A.
1997-01-01
Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.
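As an illustration of NLS parameter estimation from drawdown data, the sketch below fits the classical Theis solution, a simpler stand-in for the partially penetrating Neuman model actually used in the paper. The pumping rate, radius, and parameter values are hypothetical, and positivity of the parameters is enforced by fitting in log space rather than by the paper's explicit bounds.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import exp1

def theis_drawdown(t, T, S, Q=1e-2, r=30.0):
    # Theis drawdown for a fully penetrating well; a simpler stand-in for
    # the partially penetrating Neuman model used in the paper.
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def fit_theis(t, s_obs, p0=(1e-3, 1e-4)):
    # Fitting log-parameters keeps transmissivity T and storativity S
    # positive, mimicking the bounded NLS described in the abstract.
    def residuals(logp):
        T, S = np.exp(logp)
        return theis_drawdown(t, T, S) - s_obs
    return np.exp(least_squares(residuals, np.log(p0)).x)

t = np.logspace(-1, 1.5, 25)                 # observation times (days)
s_obs = theis_drawdown(t, 5e-3, 2e-4)        # noiseless synthetic drawdown
T_fit, S_fit = fit_theis(t, s_obs)
```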
Improvement of structural models using covariance analysis and nonlinear generalized least squares
NASA Technical Reports Server (NTRS)
Glaser, R. J.; Kuo, C. P.; Wada, B. K.
1992-01-01
The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.
Nair, S P; Righetti, R
2015-05-07
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time response of tissues undergoing creep compression is nonlinear. In nonlinear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of reliability for nonlinear LSE parameter estimates. RoN estimates the spread of parameter estimates from a single experimental realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for nonlinear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
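The resimulation idea generalizes readily. The sketch below uses a mono-exponential relaxation curve as a hypothetical stand-in for the paper's poroelastic model: fit once, estimate the noise level from the residuals, then refit many resimulated realizations and report the spread of the time-constant estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def strain_model(t, s0, tau):
    # Mono-exponential relaxation; a hypothetical stand-in for the paper's
    # poroelastic axial-strain time-constant model.
    return s0 * np.exp(-t / tau)

def ron_spread(t, y, n_resim=100, seed=0):
    # Resimulation-of-Noise (RoN) style reliability estimate: refit many
    # synthetic realizations generated from the single observed one.
    popt, _ = curve_fit(strain_model, t, y, p0=(1.0, 1.0))
    sigma = np.std(y - strain_model(t, *popt))      # noise level from residuals
    rng = np.random.default_rng(seed)
    taus = []
    for _ in range(n_resim):
        y_sim = strain_model(t, *popt) + rng.normal(0.0, sigma, t.size)
        p_sim, _ = curve_fit(strain_model, t, y_sim, p0=popt)
        taus.append(p_sim[1])
    return popt[1], np.std(taus)                    # tau estimate and its spread

t = np.linspace(0.0, 6.0, 40)
y = strain_model(t, 1.0, 2.0) + np.random.default_rng(1).normal(0.0, 0.01, t.size)
tau_hat, tau_spread = ron_spread(t, y)
```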
NASA Astrophysics Data System (ADS)
Li, T.; Griffiths, W. D.; Chen, J.
2017-08-01
The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for reliability of brittle and metal materials. In the last 30 years, many researchers have focused on the bias of Weibull modulus estimation, and some improvements have been achieved, especially in the case of the LLS method. However, there is a shortcoming in these methods for a specific type of data, where the lower tail deviates dramatically from the well-known linear fit in a classic LLS Weibull analysis. This deviation can commonly be found in the measured properties of materials, and previous applications of the LLS method to this kind of dataset produce an unreliable linear regression. This deviation was previously thought to be due to physical flaws (i.e., defects) contained in materials. However, this paper demonstrates that this deviation can also be caused by the linear transformation of the Weibull function, occurring in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis according to the linearized Weibull function, and the Non-linear Least Squares method (Non-LS) is instead recommended for the Weibull modulus estimation of casting properties.
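The two estimators contrasted above can be sketched side by side. The snippet is a minimal illustration: the median-rank probability assignment and the parameter values are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(s, m, s0):
    # Two-parameter Weibull failure probability at stress s.
    return 1.0 - np.exp(-(s / s0) ** m)

def weibull_lls(s, F):
    # Classic linearized fit: regress ln(-ln(1 - F)) on ln(s).
    x, y = np.log(s), np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)
    return m, np.exp(-c / m)                        # modulus m, scale s0

def weibull_nls(s, F, p0=(2.0, 1.0)):
    # Non-linear least squares directly on the untransformed CDF, as the
    # paper recommends for data whose lower tail deviates from linearity.
    (m, s0), _ = curve_fit(weibull_cdf, s, F, p0=p0)
    return m, s0

# Synthetic strengths lying exactly on a Weibull curve (median-rank probabilities).
m_true, s0_true, n = 3.0, 2.0, 20
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
s = s0_true * (-np.log(1.0 - F)) ** (1.0 / m_true)
m_lls, s0_lls = weibull_lls(s, F)
m_nls, s0_nls = weibull_nls(s, F)
```

On such idealized data both estimators agree; the paper's point is that they diverge when the lower tail of real measurements departs from the linearized form.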
Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2014-11-01
Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and it often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and the kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction non-linear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be very efficiently implemented and, therefore, it does not considerably affect the overall reconstruction time.
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
2000-09-01
A scientific data analysis and presentation program (KaleidaGraph, Synergy Software, available for both Macintosh and Windows platforms) is adopted as the main data analysis tool in the undergraduate physical chemistry teaching laboratory. The capabilities of this program (and others of this type) are illustrated with application to some common data analysis problems in the laboratory curriculum, most of which require nonlinear least-squares treatments. The examples include (i) a straight line through the origin, and its transformation into a weighted average; (ii) a declining exponential with a background, with application to first-order kinetics data; (iii) the analysis of vapor pressure data by both unweighted fitting to an exponential form and weighted fitting to a linear logarithmic relationship; (iv) the analysis of overlapped spectral lines as sums of Gaussians, with application to the H/D atomic spectrum; (v) the direct fitting of IR rotation-vibration spectral line positions (HCl); and (vi) a two-function model for bomb calorimetry temperature vs time data. Procedures for implementing the use of such software in the teaching laboratory are discussed.
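Example (ii) above, a declining exponential with a background, is a one-liner in any nonlinear least-squares tool; a brief sketch outside KaleidaGraph (parameter values are illustrative) shows the same fit and the parameter uncertainties drawn from the covariance matrix:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_bg(t, A, k, b):
    # Declining exponential with a constant background (example ii above).
    return A * np.exp(-k * t) + b

t = np.linspace(0.0, 10.0, 30)
y = decay_bg(t, 5.0, 0.7, 1.2)                  # noiseless synthetic data
popt, pcov = curve_fit(decay_bg, t, y, p0=(3.0, 0.5, 0.0))
perr = np.sqrt(np.diag(pcov))                   # 1-sigma parameter uncertainties
```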
A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.
Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue
2016-07-29
The multipath effect is one of the main error sources in the Global Satellite Navigation Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking, when applied to the binary offset carrier (BOC) signals. A least-squares approximation method of the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was performed for the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals. The approximation results of CCRWs are presented. Furthermore, the performances of the approximation results are analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is performance degradation in the tracking jitter, if compared to the CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
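The truncated-SVD least-squares machinery at the heart of the design scheme is compact. The sketch below is generic (the design matrix and target are random placeholders, not an actual CCRW design problem): keeping only the largest k singular values regularizes the solution.

```python
import numpy as np

def tsvd_lstsq(A, b, k):
    # Truncated-SVD least squares: keep only the k largest singular values,
    # discarding ill-conditioned directions of the design matrix.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 6))      # placeholder design matrix
b = rng.standard_normal(40)           # placeholder target waveform samples
x_full = tsvd_lstsq(A, b, k=6)        # full rank: ordinary least squares
x_trunc = tsvd_lstsq(A, b, k=4)       # regularized by truncation
```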
Computer program for fitting low-order polynomial splines by method of least squares
NASA Technical Reports Server (NTRS)
Smith, P. J.
1972-01-01
FITLOS is a computer program which implements a new curve fitting technique. The main program reads input data, calls appropriate subroutines for curve fitting, calculates a statistical analysis, and writes output data. The method was devised as a result of the need to suppress noise in calibration of multiplier phototube capacitors.
XECT--a least squares curve fitting program for generalized radiotracer clearance model.
Szczesny, S; Turczyński, B
1991-01-01
The program uses the joint Monte Carlo-Simplex algorithm for fitting the generalized, non-monoexponential model of externally detected decay of radiotracer activity in the tissue. The optimal values of the model parameters (together with the rate of the blood flow) are calculated. A table and plot of the experimental points and the fitted curve are generated. The program was written in Borland's Turbo Pascal 5.5 for the IBM PC XT/AT and compatible microcomputers.
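The deterministic half of the joint scheme, simplex minimization of the sum of squared residuals, can be sketched as below. The abstract does not give the clearance model, so a stretched exponential is a hypothetical stand-in, and the Monte Carlo restart layer is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def clearance(t, A, k, alpha):
    # A generalized, non-monoexponential clearance curve; the abstract does
    # not give the exact model, so a stretched exponential stands in here.
    return A * np.exp(-(k * t) ** alpha)

def fit_clearance(t, y, p0=(1.0, 0.5, 1.0)):
    # Simplex (Nelder-Mead) minimization of the sum of squared residuals,
    # i.e. the deterministic half of the joint Monte Carlo-Simplex scheme.
    sse = lambda p: np.sum((clearance(t, *p) - y) ** 2)
    res = minimize(sse, p0, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 10000})
    return res.x

t = np.linspace(0.1, 12.0, 60)
y = clearance(t, 2.0, 0.3, 0.8)       # noiseless synthetic activity curve
A_hat, k_hat, alpha_hat = fit_clearance(t, y)
```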
Huggins, Frank E; Kim, Dae-Jung; Dunn, Brian C; Eyring, Edward M; Huffman, Gerald P
2009-06-01
A detailed comparison has been made of determinations by (57)Fe Mössbauer spectroscopy and four different XAFS spectroscopic methods of %Fe as hematite and ferrihydrite in 11 iron-based SBA-15 catalyst formulations. The four XAFS methods consisted of least-squares fitting of iron XANES, d(XANES)/dE, and EXAFS (k(3)chi and k(2)chi) spectra to the corresponding standard spectra of hematite and ferrihydrite. The comparison showed that, for this particular application, the EXAFS methods were superior to the XANES methods in reproducing the results of the benchmark Mössbauer method in large part because the EXAFS spectra of the two iron-oxide standards were much less correlated than the corresponding XANES spectra. Furthermore, the EXAFS and Mössbauer results could be made completely consistent by inclusion of a factor of 1.3+/-0.05 for the ratio of the Mössbauer recoilless fraction of hematite relative to that of ferrihydrite at room temperature (293K). This difference in recoilless fraction is attributed to the nanoparticle nature of the ferrihydrite compared to the bulk nature of the hematite. Also discussed are possible alternative non-least-squares XAFS methods for determining the iron speciation in this application as well as criteria for deciding whether or not least-squares XANES methods should be applied for the determination of element speciation in unknown materials.
Improved mapping of radio sources from VLBI data by least-square fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data is modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If researchers use the root-mean-square deviation to measure the closeness of this fit to the observed values, they are led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
NASA Technical Reports Server (NTRS)
Williams, S. D.; Curry, D. M.
1974-01-01
Three minimizing techniques are evaluated to determine the most efficient method for minimizing the weight of a thermal protection system and for reducing computer usage time. The methods used (numerical optimization and nonlinear least squares) for solving the minimum-weight problem involving more than one material and more than one constraint are discussed. In addition, the one material and one constraint problem is discussed.
The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs
Krick, M.S.; Harker, W.C.; Rinard, P.M.; Wenz, T.R.; Lewis, W.; Pham, P.; Ridder, P. de
1998-12-01
Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.
2014-10-09
(2) where w_j is the complex-valued weight for frequency f_j, w_{t_i} ∈ [0, 1] is the real-valued sample weight applied to each sample t_i, and ... is the harmonic matrix with elements A_{ij} = exp(j 2π f_j t_i), (4) W_t is the diagonal matrix of sample weights with elements (W_t)_{ii} = w_{t_i}, and H denotes ... samples are precluded from the fit by thresholding the clutter-matched filtered data and setting the sample weights w_{t_i} for any high-valued samples i
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for components prediction of flue gas. For the proposed method, the original independent input matrix is the input of the RBFNN and the outputs of the hidden layer nodes of the RBFNN are the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the components prediction model of flue gas. A near-infrared spectral dataset of flue gas of natural gas combustion is used for estimating the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are, respectively, reduced by 4.74%, 21.76%, and 5.32% compared to those of PLS. Hence, the proposed method has higher predictive capabilities and better robustness.
NASA Astrophysics Data System (ADS)
Cardoso, Rui P. R.; Cesar de Sa, J. M. A.
2014-06-01
IsoGeometric Analysis (IGA) is increasing in popularity as a new numerical tool for the analysis of structures. IGA provides: (i) the possibility of using higher order polynomials for the basis functions; (ii) the smoothness required for contact analysis; (iii) the possibility to operate directly on CAD geometry. The major drawback of IGA is the non-interpolatory character of the basis functions, which adds difficulty to the handling of essential boundary conditions. Moreover, IGA suffers from the same problems exhibited by other methods when it comes to reproducing isochoric and transverse shear strain deformations, especially for low order basis functions. In this work, projection techniques based on moving least squares (MLS) approximations are used to alleviate both volumetric and transverse shear locking in IGA. The main objective is to project the isochoric and transverse shear deformations from lower order subspaces by using MLS, thereby alleviating volumetric and transverse shear locking on the fully integrated space. Because different degrees of approximation functions can be used in IGA, different Gauss integration rules can also be employed, making the procedures for locking treatment in IGA very dependent on the degree of the approximation functions used. The blending of MLS with Non-Uniform Rational B-Splines (NURBS) basis functions is a methodology to overcome different locking pathologies in IGA which can also be used for enrichment procedures. Numerical examples for three-dimensional NURBS with only translational degrees of freedom are presented for both shell-type and plane strain structures.
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
Haaland, David Michael; Melgaard, David Kennett
2003-06-01
A manuscript describing the work summarized below has been submitted to Applied Spectroscopy. Comparisons of prediction models from the new ACLS and PLS multivariate spectral analysis methods were conducted using simulated data with deviations from the idealized model. Simulated uncorrelated concentration errors and uncorrelated and correlated spectral noise were included to evaluate the methods in situations representative of experimental data. The simulations were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions containing glucose, urea, ethanol, and NaCl in the concentration range from 0-500 mg/dL. The statistical significance of differences was evaluated using the Wilcoxon signed rank test. The prediction abilities with nonlinearities present were similar for both calibration methods, although concentration noise, number of samples, and spectral noise distribution sometimes affected one method more than the other. In the case of ideal errors and in the presence of nonlinear spectral responses, the differences between the standard errors of prediction of the two methods were sometimes statistically significant, but the differences were always small in magnitude. Importantly, SRACLS was found to be competitive with PLS when component concentrations were only known for a single component. Thus, SRACLS has a distinct advantage over standard CLS methods, which require that all spectral components be included in the model. In contrast to simulations with ideal error, SRACLS often generated models with superior prediction performance relative to PLS when the simulations were more realistic and included either non-uniform errors and/or correlated errors. Since the generalized ACLS algorithm is compatible with the PACLS method that allows rapid updating of models during prediction, the powerful combination of PACLS with ACLS is very promising for rapidly maintaining and transferring models for system
Kawano, Akio; Tokmakov, Igor V; Thompson, Donald L; Wagner, Albert F; Minkoff, Michael
2006-02-07
In standard applications of interpolating moving least squares (IMLS) for fitting a potential-energy surface (PES), all available ab initio points are used. Because remote ab initio points negligibly influence IMLS accuracy and increase IMLS time-to-solution, we present two methods to locally restrict the number of points included in a particular fit. The fixed radius cutoff (FRC) method includes ab initio points within a hypersphere of fixed radius. The density adaptive cutoff (DAC) method includes points within a hypersphere of variable radius depending on the point density. We test these methods by fitting a six-dimensional analytical PES for hydrogen peroxide. Both methods reduce the IMLS time-to-solution by about an order of magnitude relative to that when no cutoff method is used. The DAC method is more robust and efficient than the FRC method.
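The two cutoff schemes described above are simple to state in code. The sketch below is a minimal illustration with made-up one-dimensional points; the FRC method keeps points inside a fixed-radius hypersphere, while the DAC method keeps the k nearest points, so its effective radius adapts to the local point density.

```python
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def frc_select(points, center, radius):
    """Fixed radius cutoff (FRC): keep points inside a hypersphere of fixed radius."""
    return [p for p in points if dist(p, center) <= radius]

def dac_select(points, center, k):
    """Density adaptive cutoff (DAC): the radius grows until k points are included."""
    return sorted(points, key=lambda p: dist(p, center))[:k]

pts = [(0.0,), (1.0,), (2.0,), (5.0,)]
print(frc_select(pts, (0.0,), 1.5))      # [(0.0,), (1.0,)]
print(len(dac_select(pts, (0.0,), 3)))   # 3, regardless of how sparse the data are
```

In a sparse region FRC may return too few points to support the local fit, which is why the abstract reports DAC as the more robust choice.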
Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C
1996-01-01
Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence, the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.
NASA Astrophysics Data System (ADS)
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
Bouaricha, A.; Schnabel, R.B.
1996-12-31
This paper describes a modular software package for solving systems of nonlinear equations and nonlinear least squares problems, using a new class of methods called tensor methods. It is intended for small to medium-sized problems, say with up to 100 equations and unknowns, in cases where it is reasonable to calculate the Jacobian matrix or approximate it by finite differences at each iteration. The software allows the user to select between a tensor method and a standard method based upon a linear model. The tensor method models F(x) by a quadratic model, where the second-order term is chosen so that the model is hardly more expensive to form, store, or solve than the standard linear model. Moreover, the software provides two different global strategies, a line search and a two-dimensional trust region approach. Test results indicate that, in general, tensor methods are significantly more efficient and robust than standard methods on small and medium-sized problems in iterations and function evaluations.
Tellinghuisen, Joel
2016-03-01
Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program.
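The efficiency-adjusted ratio at the heart of this analysis is a one-line formula. The sketch below is the Pfaffl-style formulation, not necessarily the exact algorithm of the KaleidaGraph implementation described above, and the cycle numbers are made up; it omits the standard-curve estimation of the amplification efficiencies and their uncertainty propagation.

```python
def expression_ratio(e_tgt, e_ref, cq_tgt_ctrl, cq_tgt_trt, cq_ref_ctrl, cq_ref_trt):
    """Efficiency-adjusted relative expression ratio (Pfaffl-style):
    E_target^(dCq_target) / E_reference^(dCq_reference)."""
    return (e_tgt ** (cq_tgt_ctrl - cq_tgt_trt)) / (e_ref ** (cq_ref_ctrl - cq_ref_trt))

# Perfect doubling (E = 2): target quantifies one cycle earlier in the treatment
# sample, reference unchanged, so expression is doubled.
print(expression_ratio(2.0, 2.0, 25.0, 24.0, 20.0, 20.0))  # 2.0
```

Setting both efficiencies to 2.0 recovers the classical 2^(-ΔΔCq) rule; fitted efficiencies below 2 shrink or stretch the ratio accordingly.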
Guo, Yin; Harding, Lawrence B; Wagner, Albert F; Minkoff, Michael; Thompson, Donald L
2007-03-14
Classical trajectories have been used to compute rates for the unimolecular reaction H2CN-->H+HCN on a fitted ab initio potential energy surface (PES). The ab initio energies were obtained from CCSD(T)/aug-cc-pvtz electronic structure calculations. The ab initio energies were fitted by the interpolating moving least-squares (IMLS) method. This work continues the development of the IMLS method for producing ab initio PESs for use in molecular dynamics simulations of many-atom systems. A dual-level scheme was used in which the preliminary selection of data points was done using a low-level theory and the points used for fitting the final PES were obtained at the desired higher level of theory. Classical trajectories were used on various low-level IMLS fits to tune the fit to the unimolecular reaction under study. Procedures for efficiently picking data points, selecting basis functions, and defining cutoff limits to exclude distant points were investigated. The accuracy of the fitted PES was assessed by comparing interpolated values of quantities to the corresponding ab initio values. With as few as 330 ab initio points, classical trajectory rate constants were converged to 5%-10%, and the rms error over the six-dimensional region sampled by the trajectories was a few tenths of a kcal/mol.
NASA Astrophysics Data System (ADS)
Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra
2015-12-01
In the present work, an effort has been made to simultaneously estimate plasma parameters such as the electron density, electron temperature, ground state atom density, ground state ion density, and metastable state density from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition in the analysis. Under this condition, the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable state species in the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.
NASA Astrophysics Data System (ADS)
López Martínez, M. C.; Díaz Baños, F. G.; Ortega Retuerta, A.; García de La Torre, J.
2003-09-01
A procedure, based on the least-squares principle, for simultaneously fitting two or more linear data sets with a common intercept is described. Expressions are derived to calculate the common intercept and the slopes. The procedure is applied to several laboratory experiments. In particular, it is employed in the measurement of the intrinsic viscosity, which expresses the concentration dependence of the viscosity of a dilute polymer solution. The intrinsic viscosity is determined by extrapolation to zero concentration of a polymer quantity that involves the solution viscosity and concentration. The extrapolation can be done by two procedures, associated with the names of Huggins and Kraemer, both yielding the intrinsic viscosity as the common intercept. A numerical procedure, implemented in the computer program VISFIT, was devised and is employed for the data analysis in the determination of intrinsic viscosities.
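The common-intercept fit above reduces to one small linear system: for datasets j with model y = a + b_j x, the normal equations couple the shared intercept a to every slope b_j. The sketch below is a generic reconstruction of that idea, not the VISFIT code, and the two exact synthetic datasets are made up for illustration.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def common_intercept_fit(datasets):
    """Least-squares fit of y = a + b_j * x with one intercept a shared by all sets.

    Normal equations: a*N + sum_j b_j*Sx_j = Sy_total and a*Sx_j + b_j*Sxx_j = Sxy_j.
    Returns [a, b_1, ..., b_m]."""
    m = len(datasets)
    n = 1 + m
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for j, (xs, ys) in enumerate(datasets):
        A[0][0] += len(xs)
        sx = sum(xs); sxx = sum(x * x for x in xs)
        A[0][1 + j] = A[1 + j][0] = sx
        A[1 + j][1 + j] = sxx
        rhs[0] += sum(ys)
        rhs[1 + j] = sum(x * y for x, y in zip(xs, ys))
    return solve(A, rhs)

# Exact synthetic data: shared intercept a = 2, slopes 3 and -1 (Huggins/Kraemer-like pair).
d1 = ([0.0, 1.0, 2.0], [2.0, 5.0, 8.0])
d2 = ([0.0, 1.0, 2.0], [2.0, 1.0, 0.0])
print([round(v, 6) for v in common_intercept_fit([d1, d2])])  # [2.0, 3.0, -1.0]
```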
Huang, Xinrui; Zhou, Yun; Bao, Shangliang; Huang, Sung-Cheng
2007-01-01
Parametric images generated from dynamic positron emission tomography (PET) studies are useful for presenting functional/biological information in three-dimensional space, but usually suffer from high sensitivity to image noise. To improve the quality of these images, we proposed in this study a modified linear least squares (LLS) fitting method named cLLS that incorporates a clustering-based spatial constraint for generation of parametric images from dynamic PET data of high noise levels. In this method, the combination of K-means and hierarchical cluster analysis was used to classify dynamic PET data. Compared with conventional LLS, cLLS can achieve high statistical reliability in the generated parametric images without incurring a high computational burden. The effectiveness of the method was demonstrated both with computer simulation and with a human brain dynamic FDG PET study. The cLLS method is expected to be useful for generation of parametric images from dynamic FDG PET studies. PMID:18273393
Cooley, R.L.; Hill, M.C.
1992-01-01
Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
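The modified Gauss-Newton iteration at the core of the methods compared above can be illustrated on a one-parameter problem. The model y = exp(a*x), the data, and the starting value below are all made up for illustration; the actual MGN/FN and MGN/QN hybrids add full-Newton or quasi-Newton Hessian terms and globalization safeguards that are omitted here.

```python
import math

def gauss_newton_exp(xs, ys, a0=0.0, iters=20):
    """Plain Gauss-Newton for the one-parameter model y = exp(a*x)."""
    a = a0
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]  # residuals
        J = [-x * math.exp(a * x) for x in xs]             # d r_i / d a
        num = sum(j * ri for j, ri in zip(J, r))
        den = sum(j * j for j in J)
        a -= num / den  # Gauss-Newton step: -(J^T J)^{-1} J^T r
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]  # exact data generated with a = 0.5
print(round(gauss_newton_exp(xs, ys), 6))  # 0.5
```

The full-Newton correction discussed in the abstract adds the second-derivative term sum_i r_i * d2r_i/da2 to `den`, which matters precisely when residuals stay large, the regime where MGN alone was observed to converge slowly.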
Deng, Bai-Chuan; Yun, Yong-Huan; Liang, Yi-Zeng; Cao, Dong-Sheng; Xu, Qing-Song; Yi, Lun-Zhao; Huang, Xin
2015-06-23
Partial least squares (PLS) is one of the most widely used methods for chemical modeling. However, like many other parameter-tunable methods, it has a strong tendency to over-fit. Thus, a crucial step in PLS model building is to select the optimal number of latent variables (nLVs). Cross-validation (CV) is the most popular method for PLS model selection because it selects a model from the perspective of prediction ability. However, a clear minimum of prediction errors may not be obtained in CV, which makes the model selection difficult. To solve the problem, we propose a new strategy for PLS model selection which combines the cross-validated coefficient of determination (Qcv(2)) and model stability (S). S is defined as the stability of PLS regression vectors, which is obtained using model population analysis (MPA). The results show that, when a clear maximum of Qcv(2) is not obtained, S can provide additional information on over-fitting and helps in finding the optimal nLVs. Compared with other regression-vector-based indicators such as the Euclidean 2-norm (B2), the Durbin Watson statistic (DW) and the jaggedness (J), S is more sensitive to over-fitting. The model selected by our method has both good prediction ability and stability.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of the pulse waveform is a low cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose the single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform, even though the estimation of human vascular condition depends on the key points' positions in the pulse wave. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected by using the Multi-Criteria Decision Making (MCDM) method. Performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms.
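The MG model and the NRMSE criterion reported above are easy to state concretely. The sketch below uses made-up Gaussian parameters and a synthetic "pulse" rather than real waveform data, and it omits the WLS parameter estimation and MCDM weight selection; it only shows the model evaluation and the error metric.

```python
import math

def mg_model(t, components):
    """Multi-Gaussian pulse model: sum of (amplitude, center, width) Gaussian waves."""
    return sum(a * math.exp(-((t - c) ** 2) / (2 * w ** 2)) for a, c, w in components)

def nrmse(observed, fitted):
    """Normalized root-mean-square error, as a fraction of the observed range."""
    rmse = math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, fitted)) / len(observed))
    return rmse / (max(observed) - min(observed))

# Toy single-period "pulse" built from two Gaussian waves (parameters illustrative).
comps = [(1.0, 0.3, 0.1), (0.5, 0.6, 0.15)]
ts = [i / 100 for i in range(100)]
pulse = [mg_model(t, comps) for t in ts]
# A slightly mis-fitted model: second wave's center shifted by 0.01.
refit = [mg_model(t, [(1.0, 0.3, 0.1), (0.5, 0.61, 0.15)]) for t in ts]
print(nrmse(pulse, refit) < 0.02)  # True: within the 2.0% threshold reported above
```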
Estimation of the free core nutation period by the sliding-window complex least-squares fit method
NASA Astrophysics Data System (ADS)
Zhou, Yonghong; Zhu, Qiang; Salstein, David A.; Xu, Xueqing; Shi, Si; Liao, Xinhao
2016-05-01
Estimation of the free core nutation (FCN) period is a challenging task. Mostly, two methods, one direct and one indirect, have been applied in the past to address the problem by analyzing the Earth orientation parameters observed by very long baseline interferometry. The indirect method estimates the FCN period from resonance effects of the FCN on forced nutation terms, whereas the direct method estimates the FCN period using the Fourier Transform (FT) approach. However, the FCN period estimated by the direct FT technique suffers from the non-stationary characteristics of celestial pole offsets (CPO). In this study, the FCN period is estimated by another direct method, i.e., the sliding-window complex least-squares fit method (SCLF). The estimated values of the FCN period for the full set of 1984.0-2014.0 and four subsets (1984.0-2000.0, 2000.0-2014.0, 1984.0-1990.0, 1990.0-2014.0) range from -428.8 to -434.3 mean solar days. From the FT to the SCLF method, the uncertainty of the FCN period estimate falls from several tens of days to several days. Thus, the SCLF method may serve as an independent direct way to estimate the FCN period, complementing and validating the indirect resonance method that has been frequently used before.
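The core of a complex least-squares period fit can be sketched compactly. This is a simplified illustration, not the SCLF implementation: it fits a single complex exponential over one fixed span to made-up synthetic CPO-like data and scans a few trial periods, whereas the actual method slides windows through the series and models additional terms.

```python
import cmath

def complex_ls_amplitude(times, z, period):
    """Complex least-squares fit of z(t) ~ A * exp(2*pi*i*t/period).

    For a single complex basis function b, the LS solution is A = <b, z>/<b, b>.
    Returns the complex amplitude A and the residual sum of squares."""
    basis = [cmath.exp(2j * cmath.pi * t / period) for t in times]
    num = sum(bi.conjugate() * zi for bi, zi in zip(basis, z))
    den = sum(abs(bi) ** 2 for bi in basis)
    A = num / den
    rss = sum(abs(zi - A * bi) ** 2 for bi, zi in zip(basis, z))
    return A, rss

def best_period(times, z, candidates):
    """Keep the trial period whose complex LS fit leaves the smallest residual."""
    return min(candidates, key=lambda p: complex_ls_amplitude(times, z, p)[1])

# Synthetic complex series with a true period of -430 days (retrograde, FCN-like).
times = list(range(0, 2000, 10))
z = [0.2 * cmath.exp(2j * cmath.pi * t / -430.0) for t in times]
print(best_period(times, z, [-420.0, -430.0, -440.0]))  # -430.0
```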
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
Ning, Hanwen; Qing, Guangyan; Jing, Xingjian
2016-11-01
The identification of nonlinear spatiotemporal dynamical systems given by partial differential equations has attracted a lot of attention in the past decades. Several methods, such as searching principle-based algorithms, partially linear kernel methods, and coupled lattice methods, have been developed to address the identification problems. However, most existing methods have some restrictions on sampling processes in that the sampling intervals should usually be very small and uniformly distributed in spatiotemporal domains. These are actually not applicable for some practical applications. In this paper, to tackle this issue, a novel kernel-based learning algorithm named integral least square regularization regression (ILSRR) is proposed, which can be used to effectively achieve accurate derivative estimation for nonlinear functions in the time domain. With this technique, a discretization method named inverse meshless collocation is then developed to realize the dimensional reduction of the system to be identified. Thereafter, with this novel inverse meshless collocation model, the ILSRR, and a multiple-kernel-based learning algorithm, a multistep identification method is systematically proposed to address the identification problem of spatiotemporal systems with pointwise nonuniform observations. Numerical studies for benchmark systems with necessary discussions are presented to illustrate the effectiveness and the advantages of the proposed method.
Zhou, Hanying; Homer, Margie L.; Shevade, Abhijit V.; Ryan, Margaret A.
2006-01-01
The Jet Propulsion Laboratory has recently developed and built an electronic nose (ENose) using a polymer-carbon composite sensing array. This ENose is designed to be used for air quality monitoring in an enclosed space, and is designed to detect, identify and quantify common contaminants at concentrations in the parts-per-million range. Its capabilities were demonstrated in an experiment aboard the National Aeronautics and Space Administration's Space Shuttle Flight STS-95. This paper describes a modified nonlinear least-squares based algorithm developed to analyze data taken by the ENose, and its performance for the identification and quantification of single gases and binary mixtures of twelve target analytes in clean air. Results from laboratory-controlled events demonstrate the effectiveness of the algorithm to identify and quantify a gas event if concentration exceeds the ENose detection threshold. Results from the flight test demonstrate that the algorithm correctly identifies and quantifies all registered events (planned or unplanned, as singles or mixtures) with no false positives and no inconsistencies with the logged events and the independent analysis of air samples.
NASA Technical Reports Server (NTRS)
Argentiero, P. D.
1978-01-01
It is shown that the least squares collocation approach to estimating geodetic parameters is identical to conventional minimum variance estimation. Hence, the least squares collocation estimator can be derived either by minimizing the usual least squares quadratic loss function or by computing a conditional expectation by means of the regression equation. When a deterministic functional relationship between the data and the parameters to be estimated is available, one can implement a least squares solution using the functional relation to obtain an equation of condition. It is proved that the solution so obtained is identical to what is obtained through least squares collocation. The implications of this equivalence for the estimation of mean gravity anomalies are discussed.
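The stated equivalence can be seen in a few lines. For zero-mean signal x and data y with cross-covariance C_xy and data covariance C_yy, consider the best linear estimator x-hat = Ly; this is a standard sketch of the argument, not the paper's own derivation.

```latex
% Minimum-variance derivation of the least-squares collocation estimator.
% Error covariance of a linear estimator \hat{x} = L y:
\operatorname{E}\!\left[(x - Ly)(x - Ly)^{\mathsf{T}}\right]
  = C_{xx} - L C_{yx} - C_{xy} L^{\mathsf{T}} + L C_{yy} L^{\mathsf{T}}
% Setting the derivative of the trace with respect to L to zero:
\quad\Longrightarrow\quad
L = C_{xy} C_{yy}^{-1},
\qquad
\hat{x} = C_{xy} C_{yy}^{-1} \, y .
```

Under joint Gaussianity this is also the conditional expectation E[x | y], i.e., the regression equation, which is the second route to the same estimator mentioned in the abstract.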
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal-polynomial least-squares surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
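For a fixed threshold stress, the three-parameter Weibull model linearizes, and the least-squares best fit reduces to a straight-line fit in transformed coordinates. The sketch below is a generic reconstruction of that idea with made-up stress data, not the report's procedure; in practice the threshold itself must also be estimated, and failure probabilities come from ranked specimen data rather than the exact model values used here.

```python
import math

def weibull_ls_fit(stresses, pf, sigma_u):
    """Least-squares fit of the linearized three-parameter Weibull model.

    For a given threshold sigma_u, Pf = 1 - exp(-((s - sigma_u)/sigma_0)^m)
    linearizes to ln(-ln(1 - Pf)) = m*ln(s - sigma_u) - m*ln(sigma_0).
    Returns the Weibull modulus m and the scale parameter sigma_0."""
    X = [math.log(s - sigma_u) for s in stresses]
    Y = [math.log(-math.log(1.0 - p)) for p in pf]
    n = len(X)
    mx, my = sum(X) / n, sum(Y) / n
    m = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
    sigma_0 = math.exp(mx - my / m)  # from the intercept: my = m*mx - m*ln(sigma_0)
    return m, sigma_0

# Exact synthetic data from m = 10, sigma_0 = 300, sigma_u = 50 (units arbitrary).
stresses = [260.0, 300.0, 340.0, 380.0]
pf = [1.0 - math.exp(-(((s - 50.0) / 300.0) ** 10)) for s in stresses]
m, s0 = weibull_ls_fit(stresses, pf, 50.0)
print(round(m, 6), round(s0, 6))  # 10.0 300.0
```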
NASA Astrophysics Data System (ADS)
Liu, S.; Hanssen, R. F.; Samiei-Esfahany, S.; Hooper, A.; Van Leijen, F. J.
2012-01-01
We present a new method for separating ground deformation from the atmospheric phase screen (APS) based on PSInSAR. By stochastic modeling of ground deformation and APS via their variance-covariance functions, we can not only estimate the signals with the best accuracy but also assess the estimation accuracy using least-squares collocation [5]. We evaluate the APS estimated by our method and the APS obtained from a commonly used window-based filtering method [6] by comparing them to repeat-pass interferograms over ground surfaces outside the subsiding region of Mexico City. The comparison shows that our method results in a better estimation of APS than the filtering method, which ignores the temporal variability of the APS variance. Our method is advantageous when there are temporal gaps in a SAR time series. In such a case, the filtering method needs a large temporal window to suppress APS, which may lead to leakage from ground deformation into the APS.
Dawes, Richard; Thompson, Donald L; Wagner, Albert F; Minkoff, Michael
2008-02-28
An accurate and efficient method for automated molecular global potential energy surface (PES) construction and fitting is demonstrated. An interpolating moving least-squares (IMLS) method is developed with the flexibility to fit various ab initio data: (1) energies, (2) energies and gradients, or (3) energies, gradients, and Hessian data. The method is automated and flexible so that a PES can be optimally generated for trajectories, spectroscopy, or other applications. High efficiency is achieved by employing local IMLS in which fitting coefficients are stored at a limited number of expansion points, thus eliminating the need to perform weighted least-squares fits each time the potential is evaluated. An automatic point selection scheme based on the difference in two successive orders of IMLS fits is used to determine where new ab initio data need to be calculated for the most efficient fitting of the PES. A simple scan of the coordinate is shown to work well to identify these maxima in one dimension, but this search strategy scales poorly with dimension. We demonstrate the efficacy of using conjugate gradient minimizations on the difference surface to locate optimal data point placement in high dimensions. Results that are indicative of the accuracy, efficiency, and scalability are presented for a one-dimensional model potential (Morse) as well as for three-dimensional (HCN), six-dimensional (HOOH), and nine-dimensional (CH4) molecular PESs.
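A one-dimensional toy version of the IMLS idea can make the abstract concrete: weights that diverge at the data points force the moving least-squares fit to interpolate, and the Morse potential serves as the 1-D model surface mentioned above. This is a minimal sketch of the interpolation principle, not the paper's production algorithm (which adds local expansion-point storage and automatic point selection).

```python
import numpy as np

def morse(r, de=1.0, a=1.0, re=1.0):
    """Morse potential, the one-dimensional model surface."""
    return de * (1.0 - np.exp(-a * (r - re))) ** 2

def imls_eval(r_eval, r_data, v_data, degree=2, eps=1e-10):
    """Interpolating moving least squares: at each evaluation point, do a
    weighted polynomial LS fit whose weights diverge at the data points,
    so the local fit passes (essentially) through the data."""
    out = np.empty_like(r_eval)
    for i, r in enumerate(r_eval):
        w = 1.0 / ((r - r_data) ** 2 + eps)       # near-singular IMLS weights
        coef = np.polyfit(r_data, v_data, degree, w=np.sqrt(w))
        out[i] = np.polyval(coef, r)
    return out

r_data = np.linspace(0.5, 3.0, 12)                # ab initio "data" points
v_data = morse(r_data)
r_test = np.linspace(0.6, 2.9, 40)
err = np.max(np.abs(imls_eval(r_test, r_data, v_data) - morse(r_test)))
```

Note that `np.polyfit` expects weights that multiply the residuals, hence the square root.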
Weighted total least squares formulated by standard least squares theory
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A.; Jazaeri, S.
2012-01-01
This contribution presents a simple, attractive, and flexible formulation for the weighted total least squares (WTLS) problem. It is simple because it is based on the well-known standard least squares theory; it is attractive because it allows one to directly use the existing body of knowledge of the least squares theory; and it is flexible because it can be used to a broad field of applications in the error-invariable (EIV) models. Two empirical examples using real and simulated data are presented. The first example, a linear regression model, takes the covariance matrix of the coefficient matrix as
Bayesian least squares deconvolution
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte Carlo-based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months are needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
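The comparison of least squares with GLUE can be sketched on a first-order inactivation model, C(t) = C0·exp(-kt). Everything below is illustrative: the data are synthetic, and the acceptance rule for "behavioral" parameter sets is an ad hoc stand-in for the likelihood measure a real GLUE study would justify.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 13)                     # days
k_true, logC0 = 0.08, np.log(1e6)
logC_obs = logC0 - k_true * t + rng.normal(0, 0.15, t.size)

# 1) Traditional linear least squares on log-transformed concentrations
slope, intercept = np.polyfit(t, logC_obs, 1)
k_ls = -slope

# 2) GLUE-style Monte Carlo: sample k, keep "behavioral" parameter sets
k_samples = rng.uniform(0.0, 0.3, 20000)
sse = np.array([np.sum((logC0 - k * t - logC_obs) ** 2) for k in k_samples])
behavioral = k_samples[sse < 1.1 * sse.min()]       # ad hoc acceptance rule
k_glue = behavioral.mean()                          # point estimate
```

With a well-behaved model and an acceptance rule tied to the residual sum of squares, the two estimates agree closely, echoing the abstract's finding; the spread of the behavioral set gives the GLUE uncertainty band.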
NASA Astrophysics Data System (ADS)
Nørrelykke, Simon F.; Flyvbjerg, Henrik
2010-07-01
Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
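The weighting-dependent bias is easy to reproduce numerically. In the toy version below (my own construction, not the paper's derivation), the true spectrum is flat white noise, so each bin of an n-fold averaged periodogram is gamma-distributed with mean A_true, and weighting the fit of a constant amplitude by the inverse square of the data biases the result by about -2/n, the low end of the range quoted above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, nbins, A_true = 10, 200000, 1.0
# Each bin of an n-fold averaged periodogram of white noise ~ Gamma(n, A/n)
Pbar = rng.gamma(n, A_true / n, nbins)

A_plain = Pbar.mean()                            # unweighted LS fit: unbiased
# Weighted LS with weights 1/Pbar^2 (i.e. "error bars" taken from the data):
# minimizing sum((Pbar - A)^2 / Pbar^2) gives A = sum(1/Pbar) / sum(1/Pbar^2)
A_weighted = np.sum(1.0 / Pbar) / np.sum(1.0 / Pbar**2)

rel_bias = A_weighted / A_true - 1.0             # close to -2/n for this scheme
A_corrected = A_weighted / (1.0 - 2.0 / n)       # remove the multiplicative bias
```

As the abstract notes, the bias is a simple multiplicative factor on the fitted amplitude, so dividing it out (last line) restores an essentially unbiased estimate.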
Sader, John E.; Yousefi, Morteza; Friend, James R.
2014-02-15
Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
Walther, T
2006-08-01
In a previous paper, a new technique was introduced to determine the chemistry of crystallographically well-defined planar defects (such as straight interfaces, grain boundaries, twins, and inversion or antiphase domain boundaries) in the presence of homogeneous solute segregation or selective doping. The technique is based on a linear least-squares fit using series of analytical (electron energy-loss or energy-dispersive X-ray) spectra acquired in a transmission electron microscope operated in nano-probe mode with the planar defect centred edge-on. First, additional notes are given on the use of proper k-factors and the determination of Gibbsian excess segregation. Using simulated data sets, it is shown that the linear least-squares fit improves both the accuracy and the robustness to noise beyond what is obtainable by independently repeated measurements. It is then shown how the method, originally developed for a stationary nano-probe in transmission electron microscopy, can be extended to a focused electron beam that scans a square region in scanning transmission electron microscopy. The necessary modifications to the scan geometry and the corresponding numerical evaluation are described, and three different practical implementations are proposed.
Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia
2014-11-01
To deal with the nonlinear characteristics of spectral data from thermal power plant flue gas, a nonlinear partial least-squares (PLS) analysis method with a neural-network internal model is adopted in this paper. The latent variables of the independent and dependent variables are first extracted by PLS regression and then used as the inputs and outputs, respectively, of a neural network that is trained to build the nonlinear internal model. For flue gas spectra from a thermal power plant, PLS is compared with nonlinear PLS using three internal models: a back-propagation neural network (BP-NPLS), a radial basis function neural network (RBF-NPLS), and an adaptive neuro-fuzzy inference system (ANFIS-NPLS). Relative to PLS, the root mean square error of prediction (RMSEP) for sulfur dioxide is reduced by 16.96%, 16.60% and 19.55% for BP-NPLS, RBF-NPLS and ANFIS-NPLS, respectively; for nitric oxide, by 8.60%, 8.47% and 10.09%; and for nitrogen dioxide, by 2.11%, 3.91% and 3.97%. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, because neural networks can closely approximate nonlinear characteristics, the nonlinear PLS methods with neural-network internal models described here have good predictive capability and robustness, and to some extent overcome the limitations of nonlinear PLS methods with other internal models, such as polynomial and spline functions. ANFIS-NPLS performs best, as its adaptive fuzzy inference internal model is able to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an…
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
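The trick described in this abstract, making the target quantity itself an adjustable parameter so its propagated standard error drops out of the covariance matrix, can be shown in a few lines. A sketch for a straight-line fit follows, where the target is the predicted y at a chosen x0; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 20)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)
x0 = 7.5                                          # where we want y and its SE

# Reparameterized design: y = y0 + b*(x - x0), so y0 IS the value at x0
A = np.column_stack([np.ones_like(x), x - x0])
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = x.size - 2
cov = (res[0] / dof) * np.linalg.inv(A.T @ A)     # parameter covariance
y0_hat, se_y0 = coef[0], np.sqrt(cov[0, 0])       # SE read off the diagonal

# Cross-check against the textbook error-propagation formula
B = np.column_stack([np.ones_like(x), x])
c0, r0, *_ = np.linalg.lstsq(B, y, rcond=None)
cov0 = (r0[0] / dof) * np.linalg.inv(B.T @ B)
se_prop = np.sqrt(cov0[0, 0] + 2 * x0 * cov0[0, 1] + x0**2 * cov0[1, 1])
```

The two standard errors agree to machine precision, since the reparameterization is just a linear change of variables.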
Three Perspectives on Teaching Least Squares
ERIC Educational Resources Information Center
Scariano, Stephen M.; Calzada, Maria
2004-01-01
The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.
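The three classroom routes to the same least-squares line can be set side by side: the closed-form "calculus" solution obtained by zeroing the partial derivatives of the sum of squared errors, the linear-algebra normal equations, and a library call. The data are illustrative.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# 1) Closed form from setting dSSE/da = dSSE/db = 0
n = x.size
b1 = (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(x**2) - x.sum()**2)
a1 = y.mean() - b1 * x.mean()

# 2) Normal equations (A^T A) beta = A^T y
A = np.column_stack([np.ones(n), x])
a2, b2 = np.linalg.solve(A.T @ A, A.T @ y)

# 3) Library routine (returns slope first)
b3, a3 = np.polyfit(x, y, 1)
```

All three give identical intercept and slope, which is exactly the pedagogical point: developmentally different derivations, one answer.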
Bade, D; Parak, F
1976-12-22
The calculation of the magnetic susceptibility from a published term scheme for the ferrous iron in deoxygenated human haemoglobin is discussed, and a procedure for the simultaneous least-squares fit of susceptibility and Mössbauer data is presented. The application of this procedure to the appropriate measurements on human haemoglobin leads to a rearrangement of the low-lying electronic levels of the iron. The term schemes obtained from two different sets of susceptibility data, each used in combination with one set of Mössbauer data, overlap within their error bars. The resulting level scheme of the Fe is correlated with the distance of the iron atom from the haem plane and the Fe-His F8 distance, and some biological implications of these correlations are discussed.
NASA Astrophysics Data System (ADS)
Pham, Phuong Thi Thu; Wada, Tomohisa
This paper presents a pilot-aided channel estimation method which is particularly suitable for the mobile WiMAX 802.16e Downlink Partial Usage of Subchannels mode. Based on this mode, several commonly used channel estimation methods are studied, and a method based on least-squares line fitting is proposed. As user data are distributed onto permuted clusters of subcarriers in the transmitted OFDMA symbol, the proposed channel estimation method exploits this structure to provide better performance than conventional approaches while offering remarkably low complexity in practical implementation. Simulation results with different ITU channels for mobile environments show that, depending on the situation, an enhancement of 5 dB or more in terms of SNR can be achieved.
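The core of such a scheme, a least-squares line fitted to pilot-position channel estimates within one cluster and evaluated at the data subcarriers, can be sketched as follows. The cluster layout, pilot positions, and channel model below are illustrative assumptions, not the 802.16e specifics.

```python
import numpy as np

rng = np.random.default_rng(6)
k = np.arange(14)                                 # subcarrier indices in a cluster
pilots = np.array([1, 5, 9, 13])                  # assumed pilot positions
h_true = 0.9 * np.exp(-1j * 0.12 * k)             # slowly varying toy channel

noise = 0.02 * (rng.normal(size=4) + 1j * rng.normal(size=4))
h_pilot = h_true[pilots] + noise                  # noisy LS estimates at pilots

# Least-squares line fit over the cluster, real and imaginary parts separately
cr = np.polyfit(pilots, h_pilot.real, 1)
ci = np.polyfit(pilots, h_pilot.imag, 1)
h_hat = np.polyval(cr, k) + 1j * np.polyval(ci, k)

rms_err = np.sqrt(np.mean(np.abs(h_hat - h_true) ** 2))
```

The line fit both interpolates between pilots and averages their noise, which is where the complexity advantage over, say, Wiener filtering comes from.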
Goicoechea, Héctor C; Collado, María S; Satuf, María L; Olivieri, Alejandro C
2002-10-01
The complementary use of partial least-squares (PLS) multivariate calibration and artificial neural networks (ANNs) for the simultaneous spectrophotometric determination of three active components in a pharmaceutical formulation has been explored. The presence of non-linearities caused by chemical interactions was confirmed by a recently discussed methodology based on Mallows augmented partial residual plots. Ternary mixtures of chlorpheniramine, naphazoline and dexamethasone in a matrix of excipients have been resolved by using PLS for the two major analytes (chlorpheniramine and naphazoline) and ANNs for the minor one (dexamethasone). Notwithstanding the large number of constituents, their high degree of spectral overlap and the occurrence of non-linearities, rapid and simultaneous analysis has been achieved, with reasonably good accuracy and precision. No extraction procedures using non-aqueous solvents are required.
NASA Astrophysics Data System (ADS)
Legaie, D.; Pron, H.; Bissieux, C.
2008-11-01
Integral transforms (Laplace, Fourier, Hankel) are widely used to solve the heat diffusion equation. Moreover, it often proves advantageous to perform the estimation of thermophysical properties in the transformed space. Here, an analytical model has been developed, leading to a well-posed inverse problem of parameter identification. Two black coatings, a thin black paint layer and an amorphous carbon film, were studied by photothermal infrared thermography. A Hankel transform was applied to both the thermal model and the data, and the estimation of thermal diffusivity was achieved in the Hankel space. The inverse problem is formulated as a non-linear least-squares problem, and a Gauss-Newton algorithm is used for the parameter identification.
Essa, Khalid S.
2013-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating the parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472
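The structure of the approach, reducing shape-factor estimation to a one-variable root-finding problem f(q) = 0 built from the anomaly at the origin and at a profile point, can be illustrated generically. The forward model g(x) = A/(x² + z²)^q below (q = 1.5 corresponds to a sphere) and the chosen profile point are assumptions for the demo, not the paper's exact formulation.

```python
import numpy as np

z, A, q_true = 10.0, 5000.0, 1.5                  # assumed depth, amplitude, shape
g = lambda x, q: A / (x**2 + z**2) ** q           # illustrative anomaly model

# Ratio of the anomaly at a profile point x = 10 to the anomaly at the origin
# eliminates A and yields a one-variable equation f(q) = 0 with root q_true.
N = g(10.0, q_true) / g(0.0, q_true)              # "N-value" from the data
f = lambda q: N - (z**2 / (10.0**2 + z**2)) ** q

lo, hi = 0.1, 3.0                                 # bracket the root
for _ in range(60):                               # simple bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
q_hat = 0.5 * (lo + hi)
```

With noisy field data, the same root-finding step would be applied to N-values computed from measured anomalies, and q_hat would be averaged or least-squares-fit over several profile points.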
Feng, Jie; Wang, Zhe; Li, Lizhi; Li, Zheng; Ni, Weidou
2013-03-01
A nonlinearized multivariate dominant factor-based partial least-squares (PLS) model was applied to coal elemental concentration measurement. For C concentration determination in bituminous coal, the intensities of multiple characteristic lines of the main elements in coal were applied to construct a comprehensive dominant factor that would provide main concentration results. A secondary PLS thereafter applied would further correct the model results by using the entire spectral information. In the dominant factor extraction, nonlinear transformation of line intensities (based on physical mechanisms) was embedded in the linear PLS to describe nonlinear self-absorption and inter-element interference more effectively and accurately. According to the empirical expression of self-absorption and Taylor expansion, nonlinear transformations of atomic and ionic line intensities of C were utilized to model self-absorption. Then, the line intensities of other elements, O and N, were taken into account for inter-element interference, considering the possible recombination of C with O and N particles. The specialty of coal analysis by using laser-induced breakdown spectroscopy (LIBS) was also discussed and considered in the multivariate dominant factor construction. The proposed model achieved a much better prediction performance than conventional PLS. Compared with our previous, already improved dominant factor-based PLS model, the present PLS model obtained the same calibration quality while decreasing the root mean square error of prediction (RMSEP) from 4.47 to 3.77%. Furthermore, with the leave-one-out cross-validation and L-curve methods, which avoid the overfitting issue in determining the number of principal components instead of minimum RMSEP criteria, the present PLS model also showed better performance for different splits of calibration and prediction samples, proving the robustness of the present PLS model.
NASA Astrophysics Data System (ADS)
Foster, A. L.; Klofas, J. M.; Hein, J. R.; Koschinsky, A.; Bargar, J.; Dunham, R. E.; Conrad, T. A.
2011-12-01
Marine ferromanganese crusts and nodules ("Fe-Mn crusts") are considered a potential mineral resource due to their accumulation of several economically-important elements at concentrations above mean crustal abundances. They are typically composed of intergrown Fe oxyhydroxide and Mn oxide; thicker (older) crusts can also contain carbonate fluorapatite. We used X-ray absorption fine-structure (XAFS) spectroscopy, a molecular-scale structure probe, to determine the speciation of several elements (Te, Bi, Mo, Zr, Pt) in Fe-Mn crusts. As a first step in analysis of this dataset, we have conducted principal component analysis (PCA) of Te K-edge and Mo K-edge, k³-weighted XAFS spectra. The sample set consisted of 12 homogenized, ground Fe-Mn crust samples from 8 locations in the global ocean. One sample was subjected to a chemical leach to selectively remove Mn oxides and the elements associated with it. The samples in the study set contain 50-205 mg/kg Te (average = 88) and 97-802 mg/kg Mo (average = 567). PCAs of background-subtracted, normalized Te K-edge and Mo K-edge XAFS spectra were performed on a data matrix of 12 rows × 122 columns (rows = samples; columns = Te or Mo fluorescence value at each energy step) and results were visualized without rotation. The number of significant components was assessed by the Malinowski indicator function and ability of the components to reconstruct the features (minus noise) of all sample spectra. Two components were significant by these criteria for both Te and Mo PCAs and described a total of 74 and 75% of the total variance, respectively. Reconstruction of potential model compounds by the principal components derived from PCAs on the sample set ("target transformation") provides a means of ranking models in terms of their utility for subsequent linear-combination, least-squares (LCLS) fits (the next step of data analysis). Synthetic end-member models of Te4+, Te6+, and Mo adsorbed to Fe(III) oxyhydroxide and Mn oxide were
NASA Astrophysics Data System (ADS)
Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.
2013-08-01
Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables and measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least-squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed-point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least-squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H⁻¹-type Sobolev space.
ERIC Educational Resources Information Center
Pye, Cory C.; Mercer, Colin J.
2012-01-01
The symbolic algebra program Maple and the spreadsheet Microsoft Excel were used in an attempt to reproduce the Gaussian fits to a Slater-type orbital, required to construct the popular STO-NG basis sets. The successes and pitfalls encountered in such an approach are chronicled. (Contains 1 table and 3 figures.)
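A pared-down version of the task described in this abstract is the least-squares fit of a sum of Gaussians to a 1s Slater function e^(-r) on a radial grid. In the sketch below, the Gaussian exponents are fixed at assumed trial values (close to published STO-3G 1s exponents) and only the contraction coefficients are fit, which keeps the problem linear; the real STO-NG construction also optimizes the exponents, making it nonlinear.

```python
import numpy as np

r = np.linspace(0.0, 8.0, 400)                    # radial grid (bohr)
slater = np.exp(-r)                               # 1s Slater-type orbital, zeta = 1

alphas = np.array([0.11, 0.41, 2.23])             # assumed trial Gaussian exponents
G = np.exp(-np.outer(r**2, alphas))               # one primitive Gaussian per column

# Linear least squares for the contraction coefficients
coef, *_ = np.linalg.lstsq(G, slater, rcond=None)
fit = G @ coef
rmsd = np.sqrt(np.mean((fit - slater) ** 2))      # fit quality on the grid
```

The residual is largest near r = 0, where no finite sum of Gaussians can reproduce the Slater cusp, which is one of the "pitfalls" a classroom treatment can usefully expose.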
A Weighted Least Squares Approach To Robustify Least Squares Estimates.
ERIC Educational Resources Information Center
Lin, Chowhong; Davenport, Ernest C., Jr.
This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…
Okamura, Kei; Kimoto, Hideshi; Kimoto, Takashi
2010-01-01
The open-cell titration of seawater was studied for alkalinity measurements by colorimetry. (1) The colorimetric pH on the free hydrogen ion concentration scale, pH_F(ind), was calculated from the ratio of the absorbances at 590 and 436 nm (R = A(590 nm)/A(436 nm)), along with the molar absorption coefficient ratios (e1, e2 and e3/e2) and a tentative acid dissociation constant value (pKa2). (2) The perturbation of hydrogen ion was evaluated from the change in titration mass (Δm). The total hydrogen ion concentration at m + Δm, pH_T(at m+Δm), was calculated using pH_F(ind) for a mass m and the constants for sulfate (S_T) and fluoride (F_T). (3) The alkalinity (A_T) was computed from the titrant mass (m + Δm) and the corresponding pH_T(at m+Δm) through a non-linear least-squares approach using the pKa2 value as a variable parameter. A seawater sample from 2000 m depth in the West Pacific was analyzed. The resulting A_T (2420.92 ± 3.35 μmol kg⁻¹) was in good agreement with the A_T measured potentiometrically (2420.46 ± 1.54 μmol kg⁻¹). The resulting pKa2 was 3.7037, in close proximity to that reported by King et al. (pKa2 = 3.695).
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V.; Truhlar, Donald G.
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
Meloun, Milan; Bordovská, Sylva; Galla, Lubomír
2007-11-30
The mixed dissociation constants of four non-steroidal anti-inflammatory drugs (NSAIDs), ibuprofen, diclofenac sodium, flurbiprofen and ketoprofen, at various ionic strengths I in the range 0.003-0.155 and at temperatures of 25 °C and 37 °C, were determined using two different multiwavelength and multivariate treatments of spectral data, SPECFIT/32 and SQUAD(84) nonlinear regression analyses, and INDICES factor analysis. The factor analysis in the INDICES program predicts the correct number of components, and even the presence of minor ones, when the data quality is high and the instrumental error is known. The thermodynamic dissociation constant pKa(T) was estimated by nonlinear regression of (pKa, I) data at 25 °C and 37 °C. Goodness-of-fit tests for various regression diagnostics enabled the reliability of the parameter estimates to be proven. PALLAS, MARVIN, SPARC, ACD/pKa and Pharma Algorithms predict pKa from the structural formulae of drug compounds in agreement with the experimental values. The best agreement is between the ACD/pKa program and the experimentally found values, and with SPARC. PALLAS and MARVIN predicted pKa values with larger bias errors relative to the experimental values for all four drugs.
Least Squares Estimation Without Priors or Supervision
Raphan, Martin; Simoncelli, Eero P.
2011-01-01
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive gaussian noise case. This error expression may then be optimized over noisy measurement samples, in the absence of supervised training data, yielding a generalized SURE-optimized parametric least squares (SURE2PLS) estimator. In the special case of a linear parameterization (i.e., a sum of nonlinear kernel functions), the objective function is quadratic, and we derive an incremental form for learning this estimator from data. We also show that combining the NEBLS form with its corresponding generalized SURE expression produces a generalization of the score-matching procedure for parametric density estimation. Finally, we have implemented several examples of such estimators, and we show that their performance is comparable to their optimal Bayesian or supervised regression counterparts for moderate to large amounts of data. PMID:21105827
Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants
USDA-ARS?s Scientific Manuscript database
One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
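As an illustration of the fitting step described above, the sketch below performs a weighted Gauss-Newton fit of the Langmuir isotherm S = Smax·K·C/(1 + K·C) to synthetic sorption data. The function name, starting values, and synthetic parameters are all hypothetical, not taken from the manuscript:

```python
import numpy as np

def fit_langmuir_wls(C, S, w, p0, n_iter=50):
    """Weighted Gauss-Newton fit of the Langmuir isotherm
    S = Smax*K*C/(1 + K*C); w holds per-point weights (e.g. 1/variance)."""
    Smax, K = p0
    for _ in range(n_iter):
        denom = 1.0 + K * C
        r = S - Smax * K * C / denom
        # Jacobian of the model with respect to (Smax, K)
        J = np.column_stack([K * C / denom, Smax * C / denom**2])
        W = np.diag(w)
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        Smax, K = Smax + step[0], K + step[1]
    return Smax, K

# synthetic sorption data generated from known parameters
rng = np.random.default_rng(0)
C = np.linspace(0.1, 20, 15)
S_obs = 100 * 0.5 * C / (1 + 0.5 * C) + rng.normal(0, 1.0, C.size)
Smax, K = fit_langmuir_wls(C, S_obs, np.ones_like(C), p0=(80.0, 0.3))
```

With unit weights this reduces to ordinary least squares; unequal weights address exactly the violated-assumption issue the abstract raises.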
Tensor hypercontraction. II. Least-squares renormalization
NASA Astrophysics Data System (ADS)
Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David
2012-12-01
The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
Götterdämmerung over total least squares
NASA Astrophysics Data System (ADS)
Malissiovas, G.; Neitzel, F.; Petrovic, S.
2016-06-01
The traditional way of solving non-linear least squares (LS) problems in geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have also been developed in the past by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. To this end, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D, and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology, by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research, two novel approaches are presented for the TLS solutions of fitting a straight line to points in 3D and of the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
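The simplest member of this problem class, the orthogonal (TLS) fit of a straight line to 2D points, has the direct SVD solution the abstract alludes to. A hedged sketch, not the authors' code:

```python
import numpy as np

def tls_line_2d(x, y):
    """Orthogonal (TLS) fit of a line to 2D points: returns a point on
    the line (the centroid) and the unit direction vector. The direction
    is the leading right singular vector of the centered data."""
    P = np.column_stack([x, y])
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0]          # Vt[0] = max-variance direction

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0            # exact line, so the fit should be exact
c, d = tls_line_2d(x, y)
slope = d[1] / d[0]
```

Minimizing orthogonal rather than vertical distances is what turns the adjustment into an eigenvalue/SVD problem, which is the connection the paper systematizes.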
Tzeng, Yang-Sheng; Mansour, Joey; Handler, Zachary; Gereige, Jessica; Shah, Niral; Zhou, Xin; Albert, Mitchell
2010-01-01
Hyperpolarized (HP) 3He MRI is an emerging tool in the diagnosis and evaluation of pulmonary diseases involving bronchoconstriction, such as asthma. Previously, airway diameters from dynamic HP 3He MR images of the lung were assessed manually and subjectively, and were thus prone to uncertainties associated with human error and partial volume effects. A model-based algorithm capable of fully utilizing pixel intensity profile information and attaining subpixel resolution has been developed to measure surrogate airway diameters from HP 3He MR static projection images of plastic tubes. This goal was achieved by fitting ideal pixel intensity profiles for various diameter (6.4 to 19.1 mm) circular tubes to actual pixel intensity data. A phantom was constructed from plastic tubes of various diameters connected in series and filled with water mixed with contrast agent. Projection MR images were then taken of the phantom. The favorable performance of the model-based algorithm compared to manual assessment demonstrates the viability of our approach. The manual and algorithm approaches yielded diameter measurements that generally stayed within 1× the pixel dimension. However, inconsistency of the manual approach can be observed from the larger standard deviations of its measured values. The method was then extended to HP 3He MRI, producing encouraging results at tube diameters characteristic of airways beyond the second generation, thereby justifying their application to lung airway imaging and measurement. Potential obstacles when measuring airway diameters using this method are discussed. PMID:16872072
Understanding Least Squares through Monte Carlo Calculations
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2005-01-01
The method of least squares (LS) is considered an important data analysis tool available to physical scientists. The mathematics of linear least squares (LLS) is summarized in a very compact matrix notation that renders it practically "formulaic".
Source Localization using Stochastic Approximation and Least Squares Methods
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-03-05
This paper presents two approaches to locate the source of a chemical plume: nonlinear least squares (LS) and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
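A minimal sketch of the nonlinear least squares branch (not the paper's implementation): Gauss-Newton localization of a source from inverse-square concentration readings, with a hypothetical source-strength parameter A assumed known:

```python
import numpy as np

def locate_source(sensors, conc, A, s0, n_iter=100):
    """Gauss-Newton nonlinear LS: find the source position s minimizing
    sum_i (conc_i - A/||p_i - s||^2)^2, with known strength A."""
    s = np.asarray(s0, dtype=float)
    for _ in range(n_iter):
        diff = sensors - s                  # (n, 2) offsets p_i - s
        d2 = (diff**2).sum(axis=1)
        r = conc - A / d2                   # residuals
        J = 2 * A * diff / d2[:, None]**2   # Jacobian of the model w.r.t. s
        s = s + np.linalg.solve(J.T @ J, J.T @ r)
    return s

sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 0.]])
true_s = np.array([3.0, 4.0])
A = 50.0
conc = A / ((sensors - true_s)**2).sum(axis=1)   # noiseless readings
est = locate_source(sensors, conc, A, s0=[5.0, 5.0])
```

With noiseless data this converges to the true position; adding measurement noise is where, per the abstract, SA methods start to outperform this approach.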
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
Augmented classical least squares multivariate spectral analysis
Haaland, David M.; Melgaard, David K.
2004-02-03
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
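The CLS core that ACLS augments can be illustrated compactly: a mixture spectrum is modeled as a linear combination of known pure-component spectra, and concentrations follow from one linear least-squares solve. The Gaussian bands below are synthetic stand-ins, not data from this work:

```python
import numpy as np

# Classical least squares (CLS) prediction step: mixture = concentrations @ pure.
# Two synthetic pure-component spectra (Gaussian bands on a wavelength axis).
wav = np.linspace(0.0, 1.0, 100)
pure = np.vstack([np.exp(-((wav - 0.3) / 0.05)**2),
                  np.exp(-((wav - 0.7) / 0.05)**2)])   # shape (2, 100)

c_true = np.array([0.4, 1.2])          # "unknown" concentrations
mixture = c_true @ pure                # noiseless mixture spectrum

# Recover concentrations by least squares against the pure spectra
c_est, *_ = np.linalg.lstsq(pure.T, mixture, rcond=None)
```

ACLS extends this by augmenting `pure` with spectral shapes estimated from calibration residuals, so that unmodeled sources of variation no longer bias `c_est`.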
NASA Astrophysics Data System (ADS)
Satija, Aaditya; Caers, Jef
2015-03-01
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming requiring iteration over a high dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior models realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA); the latter aiming to establish a linear relationship between data and forecast components. If such mapping is successful, then we illustrate with several cases that (1) simple regression techniques with a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of uncertainty obtained from full posterior sampling with rejection sampling.
NASA Astrophysics Data System (ADS)
Satija, A.; Caers, J.
2014-12-01
Hydrogeological forecasting problems, like many subsurface forecasting problems, often suffer from scarce reliable data combined with complex prior information about the underlying earth system. Assimilating and integrating this information into an earth model requires iterative parameter space exploration techniques or Markov chain Monte Carlo techniques. Since such an earth model needs to account for many large and small scale features of the underlying system, as the system gets larger, iterative modeling can become computationally prohibitive, in particular when the forward model would allow for only a few hundred model evaluations. In addition, most modeling methods do not include the purpose for which inverse methods are built, namely, the actual forecast, and usually focus only on data and model. In this study, we present a technique to extract features of the earth system informed by time-varying dynamic data (data features) and those that inform a time-varying forecasting variable (forecast features) using Functional Principal Component Analysis. Canonical Correlation Analysis is then used to examine the relationship between these features using a linear model. When this relationship suggests that the available data informs the required forecast, a simple linear regression can be used on the linear model to directly estimate the posterior of the forecasting problem, without any iterative inversion of model parameters. This idea and method is illustrated using an example of contaminant flow in an aquifer with a complex prior, large dimension, and a non-linear flow & transport model.
Generalized adjustment by least squares ( GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision, and so are the functions…
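The effect at issue is easy to reproduce: in linear least squares the parameter covariance is proportional to (XᵀX)⁻¹, and nearly collinear columns drive the between-coefficient correlation toward ±1, so the individual standard deviations alone become misleading. A small synthetic sketch (the design matrix is invented for illustration):

```python
import numpy as np

# Two nearly identical predictor columns: a textbook collinear design.
x = np.linspace(0.0, 1.0, 30)
X = np.column_stack([x, x + 0.01 * np.sin(37 * x)])

# Parameter covariance in linear LS is sigma^2 * (X^T X)^{-1};
# the sigma^2 factor cancels out of the correlation coefficient.
cov = np.linalg.inv(X.T @ X)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
```

Here `corr` is almost exactly −1: the two coefficients trade off against each other, and any function of their sum is far better determined than either coefficient alone.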
Yokoi, F.; Wong, D.F.; Marenco, S.
1994-05-01
[C11]WIN 35,428 was evaluated as a specific radioligand for the dopamine re-uptake site in brain by PET scanning. Twenty mCi of [C11]WIN 35,428 was administered IV to 8 normal volunteers (19-81 y.o.). Fifty dynamic PET scan images were acquired with blood sampling, for 90 min after [C11]WIN 35,428 injection. Four different kinetic modeling procedures were used to estimate the binding potential, k3/k4. The k3 and k4 parameters reflect the rate constants for binding to and dissociation from receptors, respectively. The first model is a three-compartment model, a standard non-linear least-squares analysis with constraint of the k1/k2 ratio; it consists of a plasma, a ligand-free, and a specifically bound compartment. The k1/k2 ratio was estimated with a two-compartment model using plasma and cerebellar data. The other three models are two-compartment models, each consisting of a plasma and a brain (cerebellum or striatum) compartment. The second model is the two-compartment graphical method of Gjedde, in which the distribution volume of [C11]WIN in striatum and cerebellum was plotted as an intercept. The third is the graphical method of Logan, a modification of the Gjedde approach, in which the distribution volume of [C11]WIN in striatum and cerebellum was plotted as a slope. The fourth model consisted of a direct fit of cerebellum and striatum without plasma input. The mean value and standard deviation of k3/k4 in each model was 4.38 ± 0.81, 3.93 ± 0.98, 4.15 ± 0.83, and 6.90 ± 2.19, respectively. There is a significant difference between the k3/k4 values of the constraint method and those of the last method. There is no significant difference between the k3/k4 values of the constraint method and the other two methods.
Weighted conditional least-squares estimation
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.
A novel extended kernel recursive least squares algorithm.
Zhu, Pingping; Chen, Badong; Príncipe, José C
2012-08-01
In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
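For orientation, the sketch below shows the ordinary (linear) recursive least squares update that KRLS lifts into a reproducing kernel Hilbert space; it is a textbook RLS loop, not the Ex-KRLS algorithm itself:

```python
import numpy as np

def rls(X, y, lam=1.0, delta=100.0):
    """Ordinary recursive least squares: process (x, d) samples one at a
    time, updating the weight vector w and the inverse correlation matrix P.
    lam is the forgetting factor; delta sets the initial P = delta * I."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = delta * np.eye(n_features)
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        w = w + k * (d - x @ w)            # correct by the a priori error
        P = (P - np.outer(k, x @ P)) / lam # rank-1 downdate of P
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
w = rls(X, X @ w_true)                     # noiseless linear system
```

The kernel version replaces the inner products with kernel evaluations; the extension discussed in the abstract additionally runs a Kalman filter over a state model in the original state space.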
ERIC Educational Resources Information Center
Alexander, John W., Jr.; Rosenberg, Nancy S.
This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
A spectral mimetic least-squares method
Bochev, Pavel; Gerritsma, Marc
2014-09-01
We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.
Review of the Generalized Least Squares Method
NASA Astrophysics Data System (ADS)
Menke, William
2014-09-01
The generalized least squares (GLS) method uses both data and prior information to solve for a best-fitting set of model parameters. We review the method and present simplified derivations of its essential formulas. Concepts of resolution and covariance—essential in all of inverse theory—are applicable to GLS, but their meaning, and especially that of resolution, must be carefully interpreted. We introduce derivations that show that the quantity being resolved is the deviation of the solution from the prior model and that the covariance of the model depends on both the uncertainty in the data and the uncertainty in the prior information. On face value, the GLS formulas for resolution and covariance seem to require matrix inverses that may be difficult to calculate for the very large (but often sparse) linear systems encountered in practical inverse problems. We demonstrate how to organize the computations in an efficient manner and present MATLAB code that implements them. Finally, we formulate the well-understood problem of interpolating data with minimum curvature splines as an inverse problem and use it to illustrate the GLS method.
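The GLS estimator the review describes can be written in a few lines: the data misfit and the prior misfit are each weighted by an inverse covariance and minimized jointly. A hedged numpy sketch using explicit inverses, which is fine for small problems (the review's point is precisely that large sparse systems need a more careful organization of the computation):

```python
import numpy as np

def gls(G, d, Cd, m_prior, Cm):
    """Generalized least squares: minimize
    (d - G m)^T Cd^{-1} (d - G m) + (m - m_prior)^T Cm^{-1} (m - m_prior)."""
    Cd_inv, Cm_inv = np.linalg.inv(Cd), np.linalg.inv(Cm)
    A = G.T @ Cd_inv @ G + Cm_inv
    b = G.T @ Cd_inv @ d + Cm_inv @ m_prior
    return np.linalg.solve(A, b)

# One accurate measurement of m1 + m2, plus a weak prior pulling both
# parameters toward zero; the data alone cannot separate m1 from m2.
G = np.array([[1.0, 1.0]])
d = np.array([2.0])
m = gls(G, d, Cd=np.array([[0.01]]), m_prior=np.zeros(2), Cm=np.eye(2))
```

The prior resolves the underdetermined direction: both parameters come out nearly equal to 1, and the solution deviates from the prior only where the data demand it, which is exactly the resolution subtlety discussed above.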
Linear Least Squares for Correlated Data
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1988-01-01
Throughout the literature authors have consistently discussed the suspicion that regression results were less than satisfactory when the independent variables were correlated. Camm, Gulledge, and Womer, and Womer and Marcotte provide excellent applied examples of these concerns. Many authors have obtained partial solutions for this problem as discussed by Womer and Marcotte and Wonnacott and Wonnacott, which result in generalized least squares algorithms to solve restrictive cases. This paper presents a simple but relatively general multivariate method for obtaining linear least squares coefficients which are free of the statistical distortion created by correlated independent variables.
Least Squares Time-Series Synchronization in Image Acquisition Systems.
Piazzo, Lorenzo; Raguso, Maria Carmela; Calzoletti, Luca; Seu, Roberto; Altieri, Bruno
2016-07-18
We consider an acquisition system constituted by an array of sensors scanning an image. Each sensor produces a sequence of readouts, called a time-series. In this framework, we discuss the image estimation problem when the time-series are affected by noise and by a time shift. In particular, we introduce an appropriate data model and consider the Least Squares (LS) estimate, showing that it has no closed form. However, the LS problem has a structure that can be exploited to simplify the solution. In particular, based on two known techniques, namely Separable Nonlinear Least Squares (SNLS) and Alternating Least Squares (ALS), we propose and analyze several practical estimation methods. As an additional contribution, we discuss the application of these methods to the data of the Photodetector Array Camera and Spectrometer (PACS), which is an infrared photometer onboard the Herschel satellite. In this context, we investigate the accuracy and the computational complexity of the methods, using both true and simulated data.
Using Least Squares to Solve Systems of Equations
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2016-01-01
The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values "n" equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…
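The n = p case described above can be checked directly: a least-squares solver applied to a square, nonsingular system returns the exact solution with zero residual.

```python
import numpy as np

# Two equations, two unknowns: 2x + y = 5 and x - y = 1.
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

# With n = p, least squares reproduces the exact solution (residual 0).
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# x -> [2., 1.]
```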
Least squares reverse time migration of controlled order multiples
NASA Astrophysics Data System (ADS)
Liu, Y.
2016-12-01
Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders to minimize the difference between Born modeling predicted multiples and specific-order multiples from observational data in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrated that the LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).
Iterative methods for weighted least-squares
Bobrovnikova, E.Y.; Vavasis, S.A.
1996-12-31
A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
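The article's reorthogonalized iterative algorithm is not reproduced here, but the underlying difficulty can be sketched: forming the normal equations squares the condition number, which is hopeless when the weights span many orders of magnitude. A common direct remedy (our own sketch, with invented data) is to scale the rows by the square-root weights and use a QR factorization instead.

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min ||W^(1/2) (A x - b)|| by row scaling plus QR.

    Avoids forming A^T W A, whose condition number is squared and can
    be disastrous when the weights are severely ill-conditioned.
    """
    s = np.sqrt(w)
    Q, R = np.linalg.qr(A * s[:, None])   # QR of the row-scaled system
    return np.linalg.solve(R, Q.T @ (b * s))

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.1])
w = np.array([1e8, 1e-8, 1.0])   # weights spanning 16 orders of magnitude
x = weighted_lstsq(A, b, w)      # x0 pinned near 1, x1 set by the third row
```

Here the enormous first weight effectively enforces x0 = 1, and the third equation then drives x1 toward 2.1; the negligibly weighted second equation barely matters.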
Very Large Least Squares Problems and Supercomputers,
1984-12-31
Least squares estimation of avian molt rates
Johnson, D.H.
1989-01-01
A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
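A minimal sketch of the idea, with hypothetical capture data (the numbers and names are ours, not from the paper): fitting a line to molt score versus date gives the molt rate as the slope, and the estimated onset date as the fitted line's zero crossing.

```python
import numpy as np

# Hypothetical recapture data: day of season vs. molt score
# (0 = molt not started, 1 = molt complete).
day = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
score = np.array([0.05, 0.26, 0.49, 0.77, 0.95])

# Fit score = rate * day + c by least squares.
A = np.column_stack([day, np.ones_like(day)])
(rate, c), *_ = np.linalg.lstsq(A, score, rcond=None)

onset = -c / rate   # day at which the fitted score crosses zero
```

With these illustrative data the estimated rate is about 0.046 score units per day and the onset falls near day 9.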
A Limitation with Least Squares Predictions
ERIC Educational Resources Information Center
Bittner, Teresa L.
2013-01-01
Although researchers have documented that some data make larger contributions than others to predictions made with least squares models, it is relatively unknown that some data actually make no contribution to the predictions produced by these models. This article explores such noncontributory data. (Contains 1 table and 2 figures.)
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2 values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T2 Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T2 in the setting of iron overload. The weighted least squares T2 IDEAL technique improves T2 estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2 assessment with nonlinear exponential fitting, and (ii) a 3D T2 IDEAL sequence, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2 IDEAL estimation. In cases of severe iron overload, T2 IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T2 compared with weighted least squares.
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-01-01
Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
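A bare-bones version of the moving-window idea (our own sketch; the published LSMW method is characterized far more thoroughly): fit a line to each window of points and take its slope as a local derivative, which works even when the perturbation axis is nonuniformly spaced, unlike standard Savitzky-Golay filters.

```python
import numpy as np

def lsmw_slope(x, y, window=5):
    """Least-squares moving-window first derivative of y(x).

    Fits a straight line to each window of points; x need not be
    uniformly spaced. Returns one slope per interior window center.
    """
    half = window // 2
    slopes = []
    for i in range(half, len(x) - half):
        xs = x[i - half:i + half + 1]
        ys = y[i - half:i + half + 1]
        A = np.column_stack([xs, np.ones_like(xs)])
        (m, _), *_ = np.linalg.lstsq(A, ys, rcond=None)
        slopes.append(m)
    return np.array(slopes)

# Nonuniform perturbation axis; the derivative of x^2 should grow like 2x.
x = np.sort(np.concatenate([np.linspace(0.0, 1.0, 7), [0.33, 0.71]]))
y = x ** 2
d = lsmw_slope(x, y, window=5)
```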
Least squares restoration of multichannel images
NASA Technical Reports Server (NTRS)
Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.
1991-01-01
Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.
Meshless Galerkin least-squares method
NASA Astrophysics Data System (ADS)
Pan, X. F.; Zhang, X.; Lu, M. W.
2005-02-01
Collocation and Galerkin methods have been dominant among existing meshless methods. Galerkin-based meshless methods are computationally intensive, whereas collocation-based meshless methods suffer from instability. A new efficient meshless method, the meshless Galerkin least-squares method (MGLS), is proposed in this paper to combine the advantages of the Galerkin and collocation methods. The problem domain is divided into two subdomains, an interior domain and a boundary domain. The Galerkin method is applied in the boundary domain, whereas the least-squares method is applied in the interior domain. The proposed scheme eliminates the possibility of spurious solutions that can arise in the least-squares method when incorrect boundary conditions are used. To investigate the accuracy and efficiency of the proposed method, a cantilevered beam and an infinite plate with a central circular hole are analyzed in detail, and numerical results are compared with those obtained by the Galerkin-based meshless method (GBMM), the collocation-based meshless method (CBMM), and the meshless weighted least squares method (MWLS). Numerical studies show that the accuracy of the proposed MGLS is much higher than that of CBMM and is close to, even better than, that of GBMM, while the computational cost is much less than that of GBMM.
Bootstrapping least-squares estimates in biochemical reaction networks.
Linder, Daniel F; Rempała, Grzegorz A
2015-01-01
The paper proposes new computational methods of computing confidence bounds for the least-squares estimates (LSEs) of rate constants in mass action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods.
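The flavor of LSE-plus-bootstrap can be sketched on the simplest possible network, a single decay A -> B, whose ODE has the closed form c(t) = c0 exp(-k t). The noise level, seed, and parametric-bootstrap scheme below are illustrative assumptions, not the paper's diffusion or linear-noise procedures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decay A -> B with rate k; ODE solution c(t) = c0 exp(-k t).
k_true, c0 = 0.5, 1.0
t = np.linspace(0.5, 5.0, 10)
c_obs = c0 * np.exp(-k_true * t) * (1 + 0.02 * rng.normal(size=t.size))

def lse_k(c):
    # Log-linearized least-squares estimate of the rate constant.
    slope, _ = np.polyfit(t, np.log(c), 1)
    return -slope

k_hat = lse_k(c_obs)

# Parametric bootstrap: refit on trajectories resimulated at k_hat.
boot = np.array([lse_k(c0 * np.exp(-k_hat * t) *
                       (1 + 0.02 * rng.normal(size=t.size)))
                 for _ in range(200)])
lo, hi = np.percentile(boot, [2.5, 97.5])   # rough 95% confidence bounds
```

The point of the paper is precisely that, for realistic networks, such resampling sidesteps the ill-conditioned covariance ODEs that an analytic interval would require.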
Iterative least squares functional networks classifier.
El-Sebakhy, Emad A; Hadi, Ali S; Faisal, Kanaan A
2007-05-01
This paper proposes unconstrained functional networks as a new classifier for pattern recognition problems. Both the methodology and the learning algorithm for this kind of computational intelligence classifier, using the iterative least squares optimization criterion, are derived. The performance of this new intelligent classification scheme is demonstrated and examined using real-world applications. A comparative study with the most common classification algorithms in both the machine learning and statistics communities is carried out. The study was conducted using only sets of second-order linearly independent polynomial functions to approximate the neuron functions. The results show that this new framework classifier is reliable, flexible, stable, and achieves high-quality performance.
Fast Algorithms for Structured Least Squares and Total Least Squares Problems
Kalsi, Anoop; O’Leary, Dianne P.
2006-01-01
We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices.
Source allocation by least-squares hydrocarbon fingerprint matching.
Burns, William A; Mudge, Stephen M; Bence, A Edward; Boehm, Paul D; Brown, John S; Page, David S; Parker, Keith R
2006-11-01
There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS.
Total least squares for anomalous change detection
Theiler, James P; Matsekh, Anna M
2010-01-01
A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.
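A minimal TLS example, unrelated to the imaging application above: the best-fit direction comes from the SVD of the centered data, so residuals are orthogonal distances to the line rather than vertical ones as in ordinary least squares. The function name and data are our own illustration.

```python
import numpy as np

def tls_fit(x, y):
    """Total least squares line through the data (errors in both x and y).

    The best-fit direction is the leading right-singular vector of the
    centered data matrix; residuals are orthogonal distances to the line.
    """
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    dx, dy = Vt[0]                       # direction of maximum variance
    slope = dy / dx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                        # exactly collinear data
slope, intercept = tls_fit(x, y)         # recovers slope 2, intercept 1
```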
Least-squares Gaussian beam migration
NASA Astrophysics Data System (ADS)
Yuan, Maolin; Huang, Jianping; Liao, Wenyuan; Jiang, Fuyou
2017-02-01
A theory of least-squares Gaussian beam migration (LSGBM) is presented to optimally estimate a subsurface reflectivity. In the iterative inversion scheme, a Gaussian beam (GB) propagator is used as the kernel of linearized forward modeling (demigration) and its adjoint (migration). Born approximation based GB demigration relies on the calculation of Green’s function by a Gaussian-beam summation for the downward and upward wavefields. The adjoint operator of GB demigration accounts for GB prestack depth migration under the cross-correlation imaging condition, where seismic traces are processed one by one for each shot. A numerical test on the point diffractors model suggests that GB demigration can successfully simulate primary scattered data, while migration (adjoint) can yield a corresponding image. The GB demigration/migration algorithms are used for the least-squares migration scheme to deblur conventional migrated images. The proposed LSGBM is illustrated with two synthetic data for a four-layer model and the Marmousi2 model. Numerical results show that LSGBM, compared to migration (adjoint) with GBs, produces images with more balanced amplitude, higher resolution and even fewer artifacts. Additionally, the LSGBM shows a robust convergence rate.
Fund trend prediction based on least squares support vector regression
NASA Astrophysics Data System (ADS)
Bao, Yilan
2011-12-01
It is well known that accurate prediction of fund trends is very important for obtaining high profits from the fund market. In this paper, least squares support vector regression (LSSVR) is adopted to predict fund trends. LSSVR has higher non-linear prediction ability than other prediction methods. The trading price of the fund "kexun" from 2007-3-1 to 2007-3-30 is used as our experimental data, and the trading price from 2007-3-26 to 2007-3-30 is used as the testing data. The forecasting results of a BP neural network and least squares support vector regression are given. The experimental results show that the forecasting values of LSSVR are nearer to the actual values than those of the BP neural network.
Vehicle detection using partial least squares.
Kembhavi, Aniruddha; Harwood, David; Davis, Larry S
2011-06-01
Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and a simple yet powerful image descriptor that captures the structural characteristics of objects named Pairs of Pixels. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance.
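The dimension-reduction step can be sketched with a minimal NIPALS-style PLS (our simplification; the detector itself projects ~70,000 features): components are chosen to maximize covariance with the response, and successive score vectors come out mutually orthogonal because of the deflation step.

```python
import numpy as np

def pls_scores(X, y, n_components=2):
    """Minimal NIPALS-style PLS1: project X onto directions that
    maximize covariance with y. Returns the score matrix T; a sketch,
    not a full PLS regression implementation.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    T = []
    for _ in range(n_components):
        w = Xc.T @ yc                       # covariance-maximizing weights
        w /= np.linalg.norm(w)
        t = Xc @ w                          # score vector
        T.append(t)
        p = Xc.T @ t / (t @ t)              # loading
        Xc = Xc - np.outer(t, p)            # deflate X
        yc = yc - t * (t @ yc) / (t @ t)    # deflate y
    return np.column_stack(T)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 10))
y = X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=50)
T = pls_scores(X, y, n_components=2)        # 50 x 2 discriminative subspace
```

A classifier is then trained in the low-dimensional score space T instead of the original feature space, which is the role PLS plays in the detector.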
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Object tracking via partial least squares analysis.
Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan
2012-10-01
We propose an object tracking algorithm that learns a set of appearance models for adaptive discriminative object representation. In this paper, object tracking is posed as a binary classification problem in which the correlation of object appearance and class labels from foreground and background is modeled by partial least squares (PLS) analysis, for generating a low-dimensional discriminative feature subspace. As object appearance is temporally correlated and likely to repeat over time, we learn and adapt multiple appearance models with PLS analysis for robust tracking. The proposed algorithm exploits both the ground truth appearance information of the target labeled in the first frame and the image observations obtained online, thereby alleviating the tracking drift problem caused by model update. Experiments on numerous challenging sequences and comparisons to state-of-the-art methods demonstrate favorable performance of the proposed tracking algorithm.
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
Flexible least squares for approximately linear systems
NASA Astrophysics Data System (ADS)
Kalaba, Robert; Tesfatsion, Leigh
1990-10-01
A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
Colorimetric characterization of LCD based on constrained least squares
NASA Astrophysics Data System (ADS)
LI, Tong; Xie, Kai; Wang, Qiaojie; Yao, Luyang
2017-01-01
In order to improve the accuracy of colorimetric characterization of liquid crystal displays, a tone matrix model for display characterization in color management is established by using constrained least squares for quadratic polynomial fitting, to find the relationship between the RGB color space and the CIEXYZ color space. 51 sets of training samples were collected to solve for the parameters, and the accuracy of the color space mapping model was verified with 100 groups of random verification samples. The experimental results showed that, with the constrained least squares method, the accuracy of the color mapping was high: the maximum color difference of this model is 3.8895 and the average color difference is 1.6689, which proves that the method has a good optimization effect on the colorimetric characterization of liquid crystal displays.
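As a hedged sketch of the quadratic-polynomial part of such a characterization (unconstrained here, and with synthetic data standing in for measured display patches), the RGB values are expanded into quadratic features and the mapping to XYZ is fit by least squares:

```python
import numpy as np

def quad_features(rgb):
    """Quadratic polynomial expansion of RGB for display characterization."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

rng = np.random.default_rng(3)
rgb = rng.uniform(size=(51, 3))        # 51 training patches, as in the paper
M_true = rng.uniform(size=(10, 3))     # hypothetical device behavior
xyz = quad_features(rgb) @ M_true      # synthetic "measured" XYZ

# Least-squares fit of the quadratic RGB -> XYZ mapping.
M, *_ = np.linalg.lstsq(quad_features(rgb), xyz, rcond=None)
pred = quad_features(rgb) @ M
```

On this noiseless synthetic data the fit reproduces the mapping exactly; on real colorimeter measurements the residual color differences quantify the model accuracy, as reported in the abstract.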
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
On the Significance of Properly Weighting Sorption Data for Least Squares Analysis
USDA-ARS?s Scientific Manuscript database
One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
Constrained least-squares regression in color spaces
NASA Astrophysics Data System (ADS)
Finlayson, Graham D.; Drew, Mark S.
1997-10-01
To characterize color values measured by color devices (e.g., scanners, color copiers, and color cameras) in a device-independent fashion these values must be transformed to colorimetric tristimulus values. The measured RGB 3- vectors are not a linear transformation away from such colorimetric vectors, however, but still the best transformation between these two data sets, or between RGB values measured under different illuminants, can easily be determined. Two well-known methods for determining this transformation are the simple least-squares fit procedure and Vrhel's principal component method. The former approach makes no a priori statement about which colors will be mapped well and which will be mapped poorly. Depending on the data set a white reflectance may be mapped accurately or inaccurately. In contrast, the principal component method solves for the transform that exactly maps a particular set of basis surfaces between illuminants (where the basis is usually designed to capture the statistics of a set of spectral reflectance data) and hence some statement can be made about which colors will be mapped without error. Unfortunately, even if the basis set fits real reflectances well this does not guarantee good color correction. Here we propose a new, compromise, constrained regression method based on finding the mapping which maps a single (or possibly two) basis surface(s) without error and, subject to this constraint, also minimizes the sum of squared differences between the mapped RGB data and corresponding XYZ tristimuli values. The constrained regression is particularly useful either when it is crucial to map a particular color with great accuracy or when there is incomplete calibration data. For example, it is generally desirable that the device coordinates for a white reflectance should always map exactly to the XYZ tristimulus white. Surprisingly, we show that when no statistics about reflectances are known then a white-point preserving mapping
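The white-point-preserving idea can be sketched as an equality-constrained least-squares problem, solved here via a generic KKT system with synthetic data; the paper's actual construction may differ, and all names below are illustrative.

```python
import numpy as np

def constrained_color_fit(RGB, XYZ, white_rgb, white_xyz):
    """3x3 correction matrix M minimizing ||RGB M - XYZ||, subject to the
    white point mapping exactly: white_rgb M = white_xyz.

    Solved one output channel at a time from the KKT conditions.
    """
    A, n = RGB, RGB.shape[1]
    M = np.zeros((n, XYZ.shape[1]))
    for j in range(XYZ.shape[1]):
        # KKT system: [2 A^T A, c; c^T, 0] [x; lam] = [2 A^T b; d]
        K = np.zeros((n + 1, n + 1))
        K[:n, :n] = 2 * A.T @ A
        K[:n, n] = white_rgb
        K[n, :n] = white_rgb
        rhs = np.concatenate([2 * A.T @ XYZ[:, j], [white_xyz[j]]])
        M[:, j] = np.linalg.solve(K, rhs)[:n]
    return M

rng = np.random.default_rng(4)
RGB = rng.uniform(size=(20, 3))
T = rng.uniform(size=(3, 3))           # hypothetical true device transform
XYZ = RGB @ T
white_rgb = np.ones(3)
M = constrained_color_fit(RGB, XYZ, white_rgb, white_rgb @ T)
```

The constraint guarantees the device white maps to the target white with zero error, while the remaining degrees of freedom still minimize the summed squared error over the calibration colors.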
Optimizing Complex Kinetics Experiments Using Least-Squares Methods
Fahr, A.; Braun, W.; Kurylo, M. J.
1993-01-01
Complex kinetic problems are generally modeled employing numerical integration routines. Our kinetics modeling program, Acuchem, has been modified to fit rate constants and absorption coefficients generically to real or synthesized "laboratory data" via a least-squares iterative procedure written for personal computers. To test the model and method of analysis, the self- and cross-combination reactions of HO2 and CH3O2 radicals, of importance in atmospheric chemistry, are examined. These radicals, as well as other species, absorb ultraviolet radiation. The resultant absorption signal is measured in the laboratory and compared with a modeled signal to obtain the best fit to various kinetic parameters. The modified program generates synthetic data with added random noise. An analysis of the synthetic data leads to an optimization of the experimental design and best values for certain rate constants and absorption coefficients.
Local validation of EU-DEM using Least Squares Collocation
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios
2016-04-01
In the present study we are dealing with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and we initially fit a surface model, in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prediction, applying Least Squares Collocation to the residuals remaining after the first step (the fitted surface). Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights that is better than 1.4 meters.
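The two-step scheme in this abstract (fit a trend surface by least squares, then treat the residuals by Least Squares Collocation) can be sketched as follows. The coordinates, height differences, and the Gaussian covariance model with its parameters are all invented for illustration; a real application would estimate the covariance function empirically.

```python
# Sketch: planar trend fit, then collocation prediction of the residual
# at a new point, with a hypothetical Gaussian covariance model.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 5.0, (40, 2))                # station coordinates, km
diff = 0.8 + 0.05 * xy[:, 0] - 0.03 * xy[:, 1]     # synthetic DEM-minus-levelling
diff += rng.normal(0.0, 0.1, 40)                   # measurement noise, m

# step 1: least-squares fit of a planar trend surface
A = np.column_stack([np.ones(40), xy])
coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
res = diff - A @ coef                              # de-trended residuals

# step 2: collocation prediction of the residual at a new point
def cov(d, c0=0.01, L=1.0):                        # Gaussian covariance, m^2
    return c0 * np.exp(-(d / L) ** 2)

d_ij = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
C = cov(d_ij) + 0.01 * np.eye(40)                  # signal + noise covariance
p = np.array([2.5, 2.5])
c_p = cov(np.linalg.norm(xy - p, axis=1))
pred = np.array([1.0, *p]) @ coef + c_p @ np.linalg.solve(C, res)
```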
Meloun, Milan; Bordovská, Sylva; Vrána, Ales
2007-02-19
The mixed dissociation constants of four anticancer drugs--camptothecine, 7-ethyl-10-hydroxycamptothecine, 10-hydroxycamptothecine and 7-ethylcamptothecine, including diprotic and triprotic molecules, at various ionic strengths I in the range 0.01-0.4 and at temperatures of 25 and 37 degrees C--were determined with the use of two different multiwavelength and multivariate treatments of spectral data, SPECFIT32 and SQUAD(84) nonlinear regression analyses and INDICES factor analysis. A proposed strategy for dissociation constant determination is presented on the acid-base equilibria of camptothecine. Indices of precise modifications of the factor analysis in the program INDICES predict the correct number of components, and even the presence of minor ones, when the data quality is high and the instrumental error is known. The thermodynamic dissociation constant pK(a)(T) was estimated by nonlinear regression of {pK(a), I} data at 25 and 37 degrees C: for camptothecine pK(a,1)(T)=2.90(7) and 3.02(8), pK(a,2)(T)=10.18(30) and 10.23(8); for 7-ethyl-10-hydroxycamptothecine, pK(a,1)(T)=3.11(2) and 2.46(6), pK(a,2)(T)=8.91(4) and 8.74(3), pK(a,3)(T)=9.70(3) and 9.47(8); for 10-hydroxycamptothecine pK(a,1)(T)=2.93(4) and 2.84(5), pK(a,2)(T)=8.93(2) and 8.92(2), pK(a,3)(T)=9.45(10) and 9.98(4); and for 7-ethylcamptothecine pK(a,1)(T)=3.10(4) and 3.30(16), pK(a,2)(T)=9.94(9) and 10.98(18). Goodness-of-fit tests using various regression diagnostics proved the reliability of the parameter estimates. Pallas and Marvin predict pK(a) values from the structural formulae of the drug compounds, in agreement with the experimental values.
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the Finite Element Method and the Euler Method are used for approximating the solution of the SIR differential equations. Newly infected influenza case data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the contact-rate proportion of the daily transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
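The regularized least squares step described above can be sketched in its Tikhonov form: the SIR/finite-element machinery of the paper is not reproduced here, only the regularised normal equations (AᵀA + λI)x = Aᵀb that stabilise an ill-conditioned estimate. All matrices and values are synthetic.

```python
# Sketch of Tikhonov-regularised least squares on a synthetic
# ill-conditioned design (two nearly collinear columns).
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.normal(size=30)   # near-collinear columns
x_true = np.array([1.0, -2.0, 0.5, 3.0, 3.0])
b = A @ x_true + rng.normal(0.0, 0.01, 30)

def ridge(A, b, lam):
    # solve the regularised normal equations (A^T A + lam*I) x = A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_plain = ridge(A, b, 1e-12)   # essentially unregularised: ill-conditioned
x_reg = ridge(A, b, 1e-3)      # regularised: tempered, stable solution
```

Only the sum of the two collinear coefficients is well determined by the data; the regulariser selects a stable representative among the near-equivalent solutions.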
Evaluation of fuzzy inference systems using fuzzy least squares
NASA Technical Reports Server (NTRS)
Barone, Joseph M.
1992-01-01
Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.
2-D weighted least-squares phase unwrapping
Ghiglia, D.C.; Romero, L.A.
1995-06-13
Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals. 6 figs.
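The weighted least-squares machinery underlying the patent can be sketched as solving the weighted normal equations AᵀWAx = AᵀWb by preconditioned conjugate gradients. The DCT-based unweighted-solution preconditioner of the patent is not reproduced; a simple Jacobi preconditioner stands in for it, and the system below is a generic synthetic one, not a phase-unwrapping operator.

```python
# Sketch: weighted normal equations solved by preconditioned CG.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 20))
w = rng.uniform(0.1, 1.0, 60)                  # per-measurement weights
b = A @ rng.normal(size=20) + rng.normal(0.0, 0.05, 60)

AtWA = A.T @ (w[:, None] * A)                  # SPD weighted normal matrix
rhs = A.T @ (w * b)

# Jacobi preconditioner (the patent uses the unweighted solution instead)
M = LinearOperator((20, 20), matvec=lambda v: v / np.diag(AtWA))
x, info = cg(AtWA, rhs, M=M)                   # info == 0 signals convergence
```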
Spreadsheet for designing valid least-squares calibrations: A tutorial.
Bettencourt da Silva, Ricardo J N
2016-02-01
Instrumental methods of analysis are used to define the price of goods, the compliance of products with a regulation, or the outcome of fundamental or applied research. These methods can only play their role properly if reported information is objective and their quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not so frequently used due to the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts in the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for application of the least-squares regression model, the assessment of instrumental response linearity and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful for cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented.
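A least-squares calibration with the standard textbook uncertainty of an interpolated concentration can be sketched as follows. The calibrator levels, signals, and replicate count are invented, not taken from the tutorial's nitrate example.

```python
# Sketch: unweighted least-squares calibration line and the uncertainty
# of a concentration interpolated from m replicate signals.
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # calibrator levels
sig = np.array([0.02, 1.98, 4.05, 5.94, 8.01, 10.03])  # instrument response

n = conc.size
slope, intercept = np.polyfit(conc, sig, 1)
fit = slope * conc + intercept
s_yx = np.sqrt(np.sum((sig - fit) ** 2) / (n - 2))     # residual std deviation

# concentration of an unknown from m replicate signals, with its uncertainty
y0, m = 5.1, 3
x0 = (y0 - intercept) / slope
sxx = np.sum((conc - conc.mean()) ** 2)
u_x0 = (s_yx / slope) * np.sqrt(1.0 / m + 1.0 / n
                                + (y0 - sig.mean()) ** 2 / (slope ** 2 * sxx))
```

This is the classical homoscedastic formula; it matches the tutorial's restriction to calibration ranges where signal precision is constant.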
Reconstruction of generic shape with cubic Bézier using least square method
NASA Astrophysics Data System (ADS)
Rusdi, Nur'Afifah; Yahya, Zainor Ridzuan
2015-05-01
Reverse engineering procedures have been used to represent the generic shapes of objects in order to explore their technical principles and mechanisms so that improved systems can be developed. Curve reconstruction has been used extensively in reverse engineering to reproduce curves. In this paper, a cubic Bézier curve function is used for curve fitting by the least squares method, which is applied to find the optimal values of the parameters in the description of the curve function.
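Fitting a cubic Bézier curve to data points by least squares reduces to a linear problem once parameter values are assigned to the data. The sketch below uses a chord-length parameterisation and invented data points; the paper's exact parameter-selection scheme is not reproduced.

```python
# Sketch: least-squares fit of the 4 control points of a cubic Bezier curve.
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.8], [2.0, 1.1], [3.0, 0.9], [4.0, 0.0]])

# chord-length parameter values t in [0, 1]
d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
t = d / d[-1]

# Bernstein basis matrix: one column per control point
B = np.column_stack([(1 - t) ** 3, 3 * (1 - t) ** 2 * t,
                     3 * (1 - t) * t ** 2, t ** 3])
ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)   # control points, shape (4, 2)

fitted = B @ ctrl          # curve evaluated at the data parameters
```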
Orthogonalizing EM: A design-based least squares algorithm.
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
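For the ordinary least squares case, the OEM idea sketched above (augment the design so its columns become orthogonal, then iterate the EM update) reduces to a simple fixed-point iteration: with d at least the largest eigenvalue of XᵀX, the update is β ← β + Xᵀ(y − Xβ)/d. The data below are synthetic; penalized variants are not shown.

```python
# Sketch: OEM iteration for ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
beta_true = rng.normal(size=6)
y = X @ beta_true + rng.normal(0.0, 0.1, 100)

# the augmentation makes the augmented Gram matrix equal d * I
d = np.linalg.eigvalsh(X.T @ X).max()
beta = np.zeros(6)
for _ in range(2000):
    beta = beta + X.T @ (y - X @ beta) / d   # EM update after orthogonalizing

ols = np.linalg.lstsq(X, y, rcond=None)[0]   # the OEM sequence converges here
```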
Lecture Graphic Aids for Least-Squares Analysis.
ERIC Educational Resources Information Center
Henderson, Giles
1988-01-01
Discusses the least squares technique and states the difficulty in finding suitable examples for using it in the chemical laboratory. Provides several least squares equations and the accompanying three dimensional drawings. Notes graphic lecture aids are available from the author as either slides or overheads. (MVL)
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
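The constrained problem above asks for a real vector x and a single phase φ minimising ‖A(x e^{iφ}) − b‖. Bydder's direct method is closed-form; the sketch below instead scans φ and solves a real-valued least-squares problem at each step, which illustrates the same objective on synthetic data.

```python
# Sketch: phase-constrained complex least squares via a scan over phi.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(20, 4)) + 1j * rng.normal(size=(20, 4))
x_true, phi_true = rng.normal(size=4), 0.7
b = A @ (x_true * np.exp(1j * phi_true)) + 0.01 * rng.normal(size=20)

def solve_at(phi):
    # for fixed phi the problem is real linear least squares:
    # stack real and imaginary parts of A x = b e^{-i phi}
    Ar = np.vstack([A.real, A.imag])
    bp = b * np.exp(-1j * phi)
    br = np.concatenate([bp.real, bp.imag])
    x = np.linalg.lstsq(Ar, br, rcond=None)[0]
    return x, np.linalg.norm(Ar @ x - br)

phis = np.linspace(0.0, np.pi, 361)        # phase is identifiable modulo pi
best = min(phis, key=lambda p: solve_at(p)[1])
x_hat, _ = solve_at(best)
```

The sign ambiguity (x, φ) ↔ (−x, φ+π) is why the scan covers only [0, π].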
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least squares finite element method are discussed. In the usual least squares finite element method, the second-order equation −∇·(∇u) + u = f is recast as a first-order system: −∇·p + u = f, ∇u − p = 0. The error analysis and numerical experiments show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.
Partitionability of Implicit Least Squares Model Fitting Problems
1981-05-01
These equations can be used to analyze the sensitivity of the solution to data perturbations, and to obtain numerical values of optimal residuals and parameters. However, the numerical solution of the system is not necessarily trivial, because the size of the system is proportional to the number of data points.
A least squares closure approximation for liquid crystalline polymers
NASA Astrophysics Data System (ADS)
Sievenpiper, Traci Ann
2011-12-01
An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing kinetic simulations for a Landau-deGennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.
Applied Algebra: The Modeling Technique of Least Squares
ERIC Educational Resources Information Center
Zelkowski, Jeremy; Mayes, Robert
2008-01-01
The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)
Fast Algorithms for Structural Analysis, Least Squares and Related Computations.
1986-08-14
several objectives. We wish to implement and test a recent parallel block scheme by Golub, Sameh, and the principal investigator. Large-scale least squares computations (joint with G. Golub and A. Sameh) arise in a variety of scientific and engineering problems and are tested on a testbed of structural analysis data. Geodetic least squares adjustment techniques on the Cedar system (joint with W. Harrod and A. Sameh).
Performance Analysis of the Least-Squares Estimator in Astrometry
NASA Astrophysics Data System (ADS)
Lobos, Rodrigo A.; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos
2015-11-01
We characterize the performance of the widely-used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square-error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramer-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator for the astrometric position that can precisely reach the Cramer-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square-error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector where intra-pixel response changes are neglected, and where flat-fielding is achieved with very high accuracy.
Zhu, W; Wang, Y; Yao, Y; Chang, J; Graber, H L; Barbour, R L
1997-04-01
We present an iterative total least-squares algorithm for computing images of the interior structure of highly scattering media by using the conjugate gradient method. For imaging the dense scattering media in optical tomography, a perturbation approach has been described previously [Y. Wang et al., Proc. SPIE 1641, 58 (1992); R. L. Barbour et al., in Medical Optical Tomography: Functional Imaging and Monitoring (Society of Photo-Optical Instrumentation Engineers, Bellingham, Wash., 1993), pp. 87-120], which solves a perturbation equation of the form W delta x = delta I. In order to solve this equation, least-squares or regularized least-squares solvers have been used in the past to determine best fits to the measurement data delta I while assuming that the operator matrix W is accurate. In practice, errors also occur in the operator matrix. Here we propose an iterative total least-squares (ITLS) method that minimizes the errors in both weights and detector readings. Theoretically, the total least-squares (TLS) solution is given by the singular vector of the matrix [W/ delta I] associated with the smallest singular value. The proposed ITLS method obtains this solution by using a conjugate gradient method that is particularly suitable for very large matrices. Simulation results have shown that the TLS method can yield a significantly more accurate result than the least-squares method.
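The classical total least-squares solution quoted in this abstract (the singular vector of the augmented matrix [W | δI] associated with the smallest singular value) can be sketched directly with an SVD. The data are synthetic; the iterative conjugate-gradient variant for very large matrices is not reproduced.

```python
# Sketch: total least squares via the SVD of the augmented matrix,
# compared with ordinary least squares, which ignores errors in W.
import numpy as np

rng = np.random.default_rng(6)
W_true = rng.normal(size=(50, 8))
x_true = rng.normal(size=8)
dI_true = W_true @ x_true

W = W_true + 0.01 * rng.normal(size=(50, 8))   # errors in the operator matrix
dI = dI_true + 0.01 * rng.normal(size=50)      # errors in the detector readings

C = np.column_stack([W, dI])                   # augmented matrix [W | dI]
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                                     # right singular vector of the
x_tls = -v[:8] / v[8]                          # smallest singular value

x_ls = np.linalg.lstsq(W, dI, rcond=None)[0]   # least squares for comparison
```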
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
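The core of the weighted least squares fit described above is solving the weighted normal equations (XᵀWX)β = XᵀWy with a weight between zero and one assigned to each calibration point. The sketch below uses an invented down-weighting rule and synthetic single-gage data, not the paper's count-based weighting of balance load components.

```python
# Sketch: weighted least-squares fit of a calibration line, where some
# calibration points are down-weighted (hypothetical weighting rule).
import numpy as np

rng = np.random.default_rng(7)
load = np.linspace(0.0, 100.0, 25)                     # applied loads
gage = 0.02 * load + 0.5 + rng.normal(0.0, 0.05, 25)   # gage outputs

X = np.column_stack([np.ones(25), load])
w = np.where(load <= 50.0, 1.0, 0.25)          # down-weight some points

# weighted normal equations: (X^T W X) beta = X^T W y
WX = w[:, None] * X
beta = np.linalg.solve(X.T @ WX, X.T @ (w * gage))
intercept, sensitivity = beta
```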
Torello, David; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J.
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
Bootstrapping Nonlinear Least Squares Estimates in the Kalman Filter Model.
1986-01-01
In most cases, parameter estimation for the KF model has been accomplished by maximum likelihood techniques involving the use of scoring or Newton-Raphson procedures. When the objective function is well behaved, the Newton-Raphson and scoring procedures enjoy quadratic convergence in the neighborhood of the maximum and one has a ready-made
Integer least-squares theory for the GNSS compass
NASA Astrophysics Data System (ADS)
Teunissen, P. J. G.
2010-07-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
Analyses of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yield large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS.
Least-squares RTM with L1 norm regularisation
NASA Astrophysics Data System (ADS)
Wu, Di; Yao, Gang; Cao, Jingjie; Wang, Yanghua
2016-10-01
Reverse time migration (RTM), for imaging complex Earth models, is a reversal procedure of the forward modelling of seismic wavefields, and hence can be formulated as an inverse problem. The least-squares RTM method attempts to minimise the difference between the observed field data and the synthetic data generated by the migration image. It can reduce the artefacts in the images of a conventional RTM which uses an adjoint operator, instead of an inverse operator, for the migration. However, as the least-squares inversion provides an average solution with minimal variation, the resolution of the reflectivity image is compromised. This paper presents the least-squares RTM method with a model constraint defined by an L1-norm of the reflectivity image. For solving the least-squares RTM with L1 norm regularisation, the inversion is reformulated as a ‘basis pursuit de-noise (BPDN)’ problem, and is solved directly using an algorithm called ‘spectral projected gradient for L1 minimisation (SPGL1)’. Three numerical examples demonstrate the effectiveness of the method which can mitigate artefacts and produce clean images with significantly higher resolution than the least-squares RTM without such a constraint.
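The L1-regularised least-squares objective behind the BPDN formulation above can be sketched with a small solver. SPGL1 itself is not reproduced; the iterative shrinkage-thresholding algorithm (ISTA) below minimises the related objective ½‖Ax − b‖² + λ‖x‖₁ on a synthetic sparse "reflectivity" and shows how the L1 term drives small values to exactly zero. The operator here is a random matrix, not a migration operator.

```python
# Sketch: ISTA for L1-regularised least squares on a synthetic sparse model.
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(80, 200)) / np.sqrt(80)   # underdetermined "migration"
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.5, -2.0, 1.0]       # sparse reflectivity
b = A @ x_true + 0.01 * rng.normal(size=80)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of gradient
x = np.zeros(200)
for _ in range(2000):
    g = A.T @ (A @ x - b)                      # gradient of the data term
    z = x - g / L                              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
```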
Total Least-Squares Adjustment of Condition Equations
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Wieser, Andreas
2010-05-01
The usual least-squares adjustment within an Errors-in-Variables (EIV) model is often described as the Total Least-Squares Solution (TLSS), just as the usual least-squares adjustment within a Random Effects Model (REM) has become popular under the name of Least-Squares Collocation (without trend). In comparison to the standard Gauss-Markov Model (GMM), the EIV-Model is less informative whereas the REM is more informative. It is known under which conditions exactly the GMM or the REM can be equivalently replaced by a model of Condition Equations or, more generally, by a Gauss-Helmert-Model (GHM). Such equivalency conditions are, however, still unknown for the EIV-Model once it is transformed into such a model of Condition Equations. In a first step, it is shown in this contribution what the respective residual vector and residual matrix would look like if the Total Least-Squares Solution is applied to condition equations with a random coefficient matrix to describe the transformation of the random error vector. The results are demonstrated using numeric examples which show that this approach may be valuable in its own right.
A Least-Squares Transport Equation Compatible with Voids
Hansen, Jon; Peterson, Jacob; Morel, Jim; Ragusa, Jean; Wang, Yaqi
2014-12-01
Standard second-order self-adjoint forms of the transport equation, such as the even-parity, odd-parity, and self-adjoint angular flux equations, cannot be used in voids. Perhaps more importantly, they experience numerical convergence difficulties in near-voids. Here we present a new form of a second-order self-adjoint transport equation that has an advantage relative to standard forms in that it can be used in voids or near-voids. Our equation is closely related to the standard least-squares form of the transport equation, with both equations being applicable in a void and having a nonconservative analytic form. However, unlike the standard least-squares form of the transport equation, our least-squares equation is compatible with source iteration. It has been found that the standard least-squares form of the transport equation with a linear-continuous finite-element spatial discretization has difficulty in the thick diffusion limit. Here we extensively test the 1D slab-geometry version of our scheme with respect to void solutions, spatial convergence rate, and the intermediate and thick diffusion limits. We also define an effective diffusion synthetic acceleration scheme for our discretization. Our conclusion is that our least-squares Sₙ formulation represents an excellent alternative to existing second-order Sₙ transport formulations.
On the least-square estimation of parameters for statistical diffusion weighted imaging model.
Yuan, Jing; Zhang, Qinwei
2013-01-01
A statistical model for diffusion-weighted imaging (DWI) has been proposed for better tissue characterization by introducing a distribution function for apparent diffusion coefficients (ADC) to account for the restrictions and hindrances to water diffusion in biological tissues. This paper studies the precision and uncertainty in the estimation of parameters for the statistical DWI model with a Gaussian distribution, i.e., the position of the distribution maximum (Dm) and the distribution width (σ), using non-linear least-squares (NLLS) fitting. Numerical simulation shows that precise parameter estimation, particularly for σ, requires an extremely high signal-to-noise ratio (SNR) in the DWI signal when NLLS fitting is used. Unfortunately, such an extremely high SNR may be difficult to achieve with the normal settings of a clinical DWI scan. For Dm and σ parameter mapping of the in vivo human brain, multiple local minima are found, resulting in large uncertainties in the estimation of the distribution width σ. The estimation error in NLLS fitting originates primarily from the insensitivity of the DWI signal intensity to the distribution width σ, as given by the functional form of the Gaussian-type statistical DWI model.
Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least-squares method and the nonlinear neural network predicted identical results.
From direct-space discrepancy functions to crystallographic least squares.
Giacovazzo, Carmelo
2015-01-01
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
Partial least squares and random sample consensus in outlier detection.
Peng, Jiangtao; Peng, Silong; Hu, Yong
2012-03-16
A novel outlier detection method in partial least squares based on random sample consensus is proposed. The proposed algorithm repeatedly generates partial least squares solutions estimated from random samples and then tests each solution for consistency against the complete dataset. A comparative study of the proposed method and leave-one-out cross validation in outlier detection on simulated data and near-infrared data of pharmaceutical tablets is presented. In addition, a comparison between the proposed method and PLS, RSIMPLS, and PRM is provided. The obtained results demonstrate that the proposed method is highly efficient.
Analysis of total least squares in estimating the parameters of a mortar trajectory
Lau, D.L.; Ng, L.C.
1994-12-01
Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results, on the order of 10%, over the LS method.
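The LS/TLS distinction described above can be sketched in a few lines of numpy. This is an illustrative example on synthetic data, not the paper's mortar-trajectory code: ordinary LS assumes error only in the observation vector b, while the classical SVD construction of TLS also admits error in the data matrix A.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(n, 2))
b_clean = A_clean @ x_true

# Perturb both the data matrix and the observation vector.
A = A_clean + 0.01 * rng.normal(size=A_clean.shape)
b = b_clean + 0.01 * rng.normal(size=n)

# Ordinary LS: minimize ||Ax - b||_2 with error assumed only in b.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# TLS: right singular vector of [A | b] for the smallest singular value;
# normalize so its last component is -1, giving [x; -1].
Z = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]
x_tls = -v[:2] / v[2]

print(x_ls, x_tls)
```

Both estimates recover the generating parameters here; the methods differ materially when the perturbation of A is large relative to that of b.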
Lane detection and tracking based on improved Hough transform and least-squares method
NASA Astrophysics Data System (ADS)
Sun, Peng; Chen, Hui
2014-11-01
Lane detection and tracking play important roles in lane departure warning systems (LDWS). In order to improve real-time performance and obtain better lane detection results, an improved algorithm for lane detection and tracking, based on a combination of an improved Hough transform and least-squares fitting, is proposed in this paper. In the image pre-processing stage, a multi-gradient Sobel operator is first used to obtain the edge map of road images; the adaptive Otsu algorithm is then used to obtain a binary image; and, to meet the precision requirement of single-pixel width, a fast parallel thinning algorithm is used to extract the skeleton map of the binary image. Lane lines are then initially detected using a polar-angle-constrained Hough transform, which narrows the search scope. Finally, during the tracking phase, a dynamic region of interest (ROI) is set up based on the detection result of the previous image frame, and within the predicted dynamic ROI the least-squares fitting method is used to fit the lane line, which greatly reduces the computational load. A failure-judgment module is also added to improve detection reliability: when least-squares fitting fails, the polar-angle-constrained Hough transform is restarted for initial detection, achieving coordination between the Hough transform and least-squares fitting. The algorithm takes into account both the robustness of the Hough transform and the real-time performance of least-squares fitting, and sets up a dynamic ROI for lane detection. Experimental results show good lane recognition performance; the average time to complete the preprocessing and lane recognition of one road image is less than 25 ms, demonstrating that the algorithm has good real-time performance and strong robustness.
Foundations for estimation by the method of least squares
NASA Technical Reports Server (NTRS)
Hauck, W. W., Jr.
1971-01-01
Least squares estimation is discussed from the point of view of a statistician. Much of the emphasis is on problems encountered in application and, more specifically, on questions involving assumptions: what assumptions are needed, when are they needed, what happens if they are not valid, and if they are invalid, how that fact can be detected.
Using partial least squares regression to analyze cellular response data.
Kreeger, Pamela K
2013-04-16
This Teaching Resource provides lecture notes, slides, and a problem set for a lecture introducing the mathematical concepts and interpretation of partial least squares regression (PLSR) that were part of a course entitled "Systems Biology: Mammalian Signaling Networks." PLSR is a multivariate regression technique commonly applied to analyze relationships between signaling or transcriptional data and cellular behavior.
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
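As a rough illustration of the idea behind the abstract above (synthetic data, not the paper's flight-control implementation), a least-squares function approximation in a Chebyshev orthogonal basis can be done directly with numpy's polynomial module:

```python
import numpy as np

# Least-squares fit of exp(x) on [-1, 1] in a Chebyshev basis.
x = np.linspace(-1.0, 1.0, 400)
y = np.exp(x)

coeffs = np.polynomial.chebyshev.chebfit(x, y, deg=8)   # LS fit of basis coefficients
y_hat = np.polynomial.chebyshev.chebval(x, coeffs)      # evaluate the approximation

max_err = np.max(np.abs(y - y_hat))
print(max_err)
```

The near-orthogonality of the sampled Chebyshev basis keeps the normal equations well conditioned, which is the numerical advantage the abstract alludes to relative to a monomial basis.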
The Least-Squares Estimation of Latent Trait Variables.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi
This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by the maximum likelihood method…
Software For Least-Squares And Robust Estimation
NASA Technical Reports Server (NTRS)
Jeffreys, William H.; Fitzpatrick, Michael J.; Mcarthur, Barbara E.; Mccartney, James
1990-01-01
GAUSSFIT computer program includes full-featured programming language facilitating creation of mathematical models solving least-squares and robust-estimation problems. Programming language designed to make it easy to specify complex reduction models. Written in 100 percent C language.
Neither fixed nor random: weighted least squares meta-analysis.
Stanley, T D; Doucouliagos, Hristos
2015-06-15
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
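A hedged numeric sketch of the unrestricted weighted least squares (WLS) average described above, with made-up effect sizes: regress the standardized effects (effect/SE) on precision (1/SE) with no intercept. The slope is the WLS estimate, and its point estimate coincides with the fixed-effect weighted average, as the abstract notes.

```python
import numpy as np

# Hypothetical study-level effects and standard errors.
effects = np.array([0.30, 0.45, 0.25, 0.50, 0.40])
ses = np.array([0.10, 0.15, 0.08, 0.20, 0.12])

t = effects / ses          # standardized effects
p = 1.0 / ses              # precisions
uwls = (p @ t) / (p @ p)   # no-intercept OLS slope = unrestricted WLS average

# Conventional fixed-effect (inverse-variance) weighted average.
fixed_effect = np.sum(effects / ses**2) / np.sum(1.0 / ses**2)
print(uwls, fixed_effect)
```

The two point estimates agree algebraically; the approaches differ in the standard error attached to the estimate, which for unrestricted WLS is scaled by the regression's residual variance.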
A new linear least squares method for T1 estimation from SPGR signals with multiple TRs
NASA Astrophysics Data System (ADS)
Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo
2009-02-01
The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least-squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR across SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least-squares method is proposed using a first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of T1 estimated by the proposed linear method and the nonlinear method. We show that the new linear least-squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, while allowing multiple TRs and reducing computation time significantly.
Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"
NASA Astrophysics Data System (ADS)
Lovell-Smith, J. W.; Saunders, P.; Feistel, R.
2017-10-01
Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
On derivative estimation and the solution of least squares problems
NASA Astrophysics Data System (ADS)
Belward, John A.; Turner, Ian W.; Ilic, Milos
2008-12-01
Surface interpolation finds application in many aspects of science and technology. Two specific areas of interest are surface reconstruction techniques for plant architecture and approximating cell face fluxes in the finite volume discretisation strategy for solving partial differential equations numerically. An important requirement of both applications is accurate local gradient estimation. In surface reconstruction this gradient information is used to increase the accuracy of the local interpolant, while in the finite volume framework accurate gradient information is essential to ensure second order spatial accuracy of the discretisation. In this work two different least squares strategies for approximating these local gradients are investigated and the errors associated with each analysed. It is shown that although the two strategies appear different, they produce the same least squares error. Some carefully chosen case studies are used to elucidate this finding.
Anisotropy minimization via least squares method for transformation optics.
Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H
2014-07-28
In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients are calculated to reduce the deviations from the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.
Least-squares finite element methods for quantum chromodynamics
Ketelsen, Christian; Brannick, J; Manteuffel, T; Mccormick, S
2008-01-01
A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2d Schwinger model of quantum electrodynamics.
A Christoffel function weighted least squares algorithm for collocation approximations
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2-norm method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure
2016-01-01
Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of its different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution, or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of survival function parameters, while remaining robust to the data aspects that usually make regression methods numerically unstable. PMID: 27936048
Least-squares finite element method for fluid dynamics
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1989-01-01
An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.
Conjugate Gradient Methods for Constrained Least Squares Problems
1990-01-01
Conjugate Gradient Methods for Constrained Least Squares Problems, by Douglas James; a thesis submitted to the Graduate Faculty. The research which led to this dissertation began with a look at incomplete QR preconditioners for ordinary least squares problems, including the breakdown of incomplete QR factorizations.
Compressible flow calculations employing the Galerkin/least-squares method
NASA Technical Reports Server (NTRS)
Shakib, F.; Hughes, T. J. R.; Johan, Zdenek
1989-01-01
A multielement group, domain decomposition algorithm is presented for solving linear nonsymmetric systems arising in the finite-element analysis of compressible flows employing the Galerkin/least-squares method. The iterative strategy employed is based on the generalized minimum residual (GMRES) procedure originally proposed by Saad and Schultz. Two levels of preconditioning are investigated. Applications to problems of high-speed compressible flow illustrate the effectiveness of the scheme.
A new least-squares transport equation compatible with voids
Hansen, J. B.; Morel, J. E.
2013-07-01
We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties.
Advanced Online Flux Mapping of CANDU PHWR by Least-Squares Method
Hong, In Seob; Kim, Chang Hyo; Suk, Ho Chun
2005-07-15
A least-squares method that solves both the core neutronics design equations and the in-core detector response equations on the least-squares principle is presented as a new advanced online flux-mapping method for CANada Deuterium Uranium (CANDU) pressurized heavy water reactors (PHWRs). The effectiveness of the new flux-mapping method is examined in terms of online flux-mapping calculations with numerically simulated true flux distributions and detector signals, and with the actual core-follow data for the Wolsong CANDU PHWRs in Korea. The effects of core neutronics models, as well as of detector failures and uncertainties in measured detector signals, on the effectiveness of the least-squares flux-mapping calculations are also examined. The following results are obtained. The least-squares method predicts the flux distribution in better agreement with the simulated true flux distribution than the standard core neutronics calculations by the finite difference method (FDM) computer code without using the detector signals. The adoption of the nonlinear nodal method, based on the unified nodal method formulation, instead of the FDM results in a significant improvement in the prediction accuracy of the flux-mapping calculations. The detector signals estimated from the least-squares flux-mapping calculations are much closer to the measured detector signals than those from the flux synthesis method (FSM), the current online flux-mapping method for CANDU reactors. The effect of detector failures is relatively small, so the plant can tolerate up to 25% detector failures without serious impact on plant operation. Detector signal uncertainties degrade the accuracy of the flux-mapping calculations, yet signal uncertainties on the order of 1% standard deviation can be tolerated without seriously degrading the prediction accuracy of the least-squares method. The least-squares method is disadvantageous because it requires longer CPU time than the…
NASA Astrophysics Data System (ADS)
Brooks, Gregory P.; Powers, Joseph M.
2004-03-01
A novel Karhunen-Loève (KL) least-squares model for the supersonic flow of an inviscid, calorically perfect ideal gas about an axisymmetric blunt body employing shock-fitting is developed; the KL least-squares model is used to accurately select an optimal configuration which minimizes drag. The accuracy and efficiency of the KL method are compared to a pseudospectral method employing global Lagrange interpolating polynomials. KL modes are derived from pseudospectral solutions at Mach 3.5 from a uniform sampling of the design space and subsequently employed as the trial functions for a least-squares method of weighted residuals. Results are presented showing the high accuracy of the method with fewer than 10 KL modes. Close agreement is found between the optimal geometry found using the KL model and that found with the pseudospectral solver. Not including the cost of sampling the design space and building the KL model, the KL least-squares method requires less than half the central processing unit time of the pseudospectral method to achieve the same level of accuracy. A decrease in computational cost of several orders of magnitude, as reported in the literature when comparing the KL method against discrete solvers, is shown not to hold for the current problem. The efficiency is lost because the nature of the nonlinearity renders a priori evaluation of certain necessary integrals impossible, requiring as a consequence many costly reevaluations of the integrals.
Recursive least-squares learning algorithms for neural networks
Lewis, P.S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)
1990-01-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331).
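The underlying RLS recursion can be sketched for the simplest case of a single linear unit (an illustrative reduction, not the paper's multilayer-perceptron training): the weight vector w and the inverse correlation matrix P are updated per sample at O(N^2) cost.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])   # target weights (synthetic)
w = np.zeros(3)
P = 1e3 * np.eye(3)                   # large initial P acts as a weak prior

for _ in range(500):
    u = rng.normal(size=3)            # input pattern
    d = u @ w_true                    # desired output (noise-free here)
    k = P @ u / (1.0 + u @ P @ u)     # gain vector
    w = w + k * (d - u @ w)           # error-driven weight update
    P = P - np.outer(k, u @ P)        # rank-one update of the inverse matrix

print(w)
```

Each step costs O(N^2) because of the rank-one update of P, which matches the complexity quoted in the abstract; the block-diagonal approximation mentioned there drops cross-terms of P to cut this further.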
Multilevel first-order system least squares for PDEs
McCormick, S.
1994-12-31
The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as up-winding, Petrov-Galerkin, and stream-line diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.
Solving linear inequalities in a least squares sense
Bramley, R.; Winnicka, B.
1994-12-31
Let A ∈ ℝ^(m×n) be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ‖(Ax − b)_+‖, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
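On a small synthetic problem (not the authors' algorithm, just a minimal sketch of the objective), the quantity ‖(Ax − b)_+‖ can be driven toward zero by plain gradient descent, since the gradient of 0.5·‖(Ax − b)_+‖² is Aᵀ(Ax − b)_+:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
b = A @ rng.normal(size=3) + 1.0    # constructed so Ax <= b is strictly feasible

x = np.zeros(3)
for _ in range(20000):
    r_plus = np.maximum(A @ x - b, 0.0)   # positive part of the residual
    x -= 0.02 * (A.T @ r_plus)            # gradient step on 0.5*||(Ax-b)_+||^2

residual = np.linalg.norm(np.maximum(A @ x - b, 0.0))
print(residual)
```

Because the feasible set is nonempty here, the minimum of the objective is zero and the iterates converge to a feasible point; when the inequalities are inconsistent, the same iteration settles on a least-squares compromise instead.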
Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
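The final step described above, the minimum-norm least-squares solution of an inconsistent system, can be illustrated in numpy via the pseudoinverse (a toy system of my own construction, not the paper's Matlab algorithm):

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0,  1.0],    # numerically redundant copy of row 1...
              [1.0, -1.0]])
b = np.array([2.0, 2.1, 0.0]) # ...made slightly contradictory by its RHS

# Minimum-norm least-squares solution via the Moore-Penrose pseudoinverse;
# it splits the 2.0-vs-2.1 contradiction evenly: x1 + x2 = 2.05.
x = np.linalg.pinv(A) @ b
print(x)
```

A pruning/detection stage like the paper's would flag the second equation before this solve; the pseudoinverse alone silently averages the contradiction instead of warning about it.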
Moving least-squares enhanced Shepard interpolation for the fast marching and string methods
NASA Astrophysics Data System (ADS)
Burger, Steven K.; Liu, Yuli; Sarkar, Utpal; Ayers, Paul W.
2009-01-01
The number of potential energy calculations required by the quadratic string method (QSM) and the fast marching method (FMM) is significantly reduced by using Shepard interpolation, with a moving least squares fit of the higher-order derivatives of the potential. The derivatives of the potential are fitted up to fifth order. With an error estimate for the interpolated values, this moving least squares enhanced Shepard interpolation scheme drastically reduces the number of potential energy calculations in FMM, often by up to 80%. Fitting up through the highest order tested here (fifth order) gave the best results for all grid spacings. For QSM, using enhanced Shepard interpolation gave slightly better results than approximating the surface with the usual second-order, damped Broyden-Fletcher-Goldfarb-Shanno updated Hessian. To test these methods we examined two analytic potentials, the rotational dihedral potential of alanine dipeptide, and the SN2 reaction of methyl chloride with fluoride.
A note on the total least squares problem for coplanar points
Lee, S.L.
1994-09-01
The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, …, n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error to coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
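A standard way to compute the orthogonal (TLS) line fit numerically, equivalent to Pearson's closed-form result, is via the SVD of the centered data; the sketch below and its collinear test points are illustrative, not taken from the note:

```python
import numpy as np

def tls_line(x, y):
    """Orthogonal (TLS) line fit: returns centroid, unit direction, and the
    TLS error (sum of squared perpendicular distances to the line)."""
    P = np.column_stack([x, y])
    c = P.mean(axis=0)
    # The best-fit direction is the top right singular vector of the centered
    # data; the perpendicular spread is the smallest singular value.
    _, s, Vt = np.linalg.svd(P - c)
    return c, Vt[0], s[-1] ** 2

# Exactly collinear points (y = 2x + 1) give zero TLS error.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
c, d, err = tls_line(x, y)
```

The recovered direction has slope d[1]/d[0] = 2, matching the generating line.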
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
NASA Astrophysics Data System (ADS)
Abrahamson, S.; Lonnes, S.
1995-11-01
The most common method for determining vorticity from planar velocity information is the circulation method. Its performance has been evaluated using a plane of velocity data obtained from a direct numerical simulation (DNS) of a three-dimensional plane shear layer. The ability to reproduce the vorticity was assessed both from the exact velocity field and from one perturbed by a 5% random “uncertainty”. To minimize the sensitivity to velocity uncertainties, a new method was developed using a least-squares approach. The local velocity data are fit to a model velocity field consisting of uniform translation, rigid rotation, a point source, and plane shear. The least-squares method was evaluated in the same manner as the circulation method. The largest differences between the actual and calculated vorticity fields were due to the filter-like nature of the methods. The new method is less sensitive to experimental uncertainty; however, the circulation method proved to be slightly better at reproducing the DNS field. The least-squares method provides additional information beyond the circulation method results. Using the correlation P̄ωω and a vorticity threshold criterion to identify regions of rigid rotation (or eddies), the rigid-rotation component of the least-squares method indicates these same regions.
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with weighting matrices chosen in a local sense, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated on the least squares principle using the local weighting matrices. Simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of a clearer DPOAE fine structure.
Least-squares analysis of the Mueller matrix
NASA Astrophysics Data System (ADS)
Reimer, Michael; Yevick, David
2006-08-01
In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
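The core estimation step, recovering a matrix from repeated input/output vector pairs by least squares, can be sketched generically in numpy. The 4×4 matrix, the 25 random input states, and the noise level are invented for illustration, and Mueller-matrix-specific physical constraints are ignored:

```python
import numpy as np

# Estimate a matrix M from input/output pairs t_k = M s_k + noise by
# stacking the pairs row-wise and solving one linear least-squares problem.
rng = np.random.default_rng(4)
M_true = rng.standard_normal((4, 4))          # hypothetical ground truth
S = rng.standard_normal((25, 4))              # 25 random input state vectors
T = S @ M_true.T + 0.001 * rng.standard_normal((25, 4))   # noisy outputs

# S @ X ~ T with X = M^T, so M_est is the transposed lstsq solution.
M_est = np.linalg.lstsq(S, T, rcond=None)[0].T
```

With well-spread random inputs and small noise, the estimate matches the true matrix to roughly the noise level.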
Robust inverse kinematics using damped least squares with dynamic weighting
NASA Technical Reports Server (NTRS)
Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.
1994-01-01
This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution; this reduces specific joint differential vectors. The algorithm gives an exact solution away from singularities and joint limits, and an approximate solution at or near singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and the slave manipulator was well behaved under teleoperational control.
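A single damped-least-squares step has the familiar form dq = J^T (J J^T + λ² W)^(−1) dx. The sketch below uses an identity placeholder for W and a made-up 2-link Jacobian; the paper's dynamic weighting scheme is not reproduced:

```python
import numpy as np

def dls_step(J, dx, lam=0.1, W=None):
    """One damped least-squares IK step: dq = J^T (J J^T + lam^2 W)^-1 dx.
    W stands in for a dynamic weighting matrix; identity by default."""
    m = J.shape[0]
    W = np.eye(m) if W is None else W
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * W, dx)

# A hypothetical 2-link planar-arm Jacobian, well away from singularity:
J = np.array([[-1.0, -0.5],
              [ 1.5,  0.5]])
dq = dls_step(J, np.array([0.01, 0.0]), lam=1e-3)
```

Away from singularities a small damping factor leaves the step essentially exact (J dq ≈ dx); near a singularity, larger damping trades accuracy for bounded joint velocities.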
Single directional SMO algorithm for least squares support vector machines.
Shao, Xigao; Wu, Kun; Liao, Bifeng
2013-01-01
Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, a single direction can be selected to achieve convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is comparable to that of existing methods, while its training speed is faster.
The extended-least-squares treatment of correlated data
Cohen, E.R.; Tuninsky, V.S.
1994-12-31
A generalization of the extended-least-squares algorithms for the case of correlated discrepant data is given. The expressions of the linear, unbiased, minimum-variance estimators (LUMVE) derived before are reformulated. A posteriori estimates of the variance, taking into account the inconsistency of all the experimental data, have the same form as in the case of non-correlated data. These estimates extend the previous improvement on the "traditional" Birge-ratio procedures to the case of correlated input data.
Recursive least squares estimation and Kalman filtering by systolic arrays
NASA Technical Reports Server (NTRS)
Chen, M. J.; Yao, K.
1988-01-01
One of the most promising new directions for high-throughput-rate problems is that based on systolic arrays. In this paper, using the matrix-decomposition approach, a systolic Kalman filter is formulated as a modified square-root information filter consisting of a whitening filter followed by a simple least-squares operation based on the systolic QR algorithm. By proper skewing of the input data, a fully pipelined time and measurement update systolic Kalman filter can be achieved with O(n²) processing cells, resulting in a system throughput rate of O(n).
A semi-implicit finite strain shell algorithm using in-plane strains based on least-squares
NASA Astrophysics Data System (ADS)
Areias, P.; Rabczuk, T.; de Sá, J. César; Natal Jorge, R.
2015-04-01
The use of a semi-implicit algorithm at the constitutive level allows a robust and concise implementation of low-order effective shell elements. We perform a semi-implicit integration in the stress update algorithm for finite strain plasticity: rotation terms (highly nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the (in this case evolving) reference configuration, while relative Green-Lagrange strains (quadratic) are used implicitly to account for the change in the equilibrium configuration. We parametrize both reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use a common configuration. A finite strain quadrilateral element with least-squares assumed in-plane shear strains (in curvilinear coordinates) and classical transverse shear assumed strains is introduced. It is an alternative to enhanced-assumed-strain (EAS) formulations and, contrary to these, produces an element satisfying the patch test ab initio. No additional degrees of freedom are present, in contrast with EAS. The least-squares fit allows the derivation of invariant finite strain elements which are free of both in-plane and out-of-plane shear locking and amenable to standardization in commercial codes. Two thickness parameters per node are adopted to reproduce the Poisson effect in bending. Metric components are fully deduced and exact linearization of the shell element is performed. Both isotropic and anisotropic behavior are presented in elasto-plastic and hyperelastic examples.
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression
Chen, Yanguang
2016-01-01
In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 regions of China. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
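The two ingredients, the ordering-dependent Durbin-Watson statistic and a Moran-type coefficient built from residuals and a normalized weight matrix, can be sketched as follows. The residuals and the ring-shaped neighbour matrix are invented stand-ins, not the paper's spatial sample:

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(50)               # stand-in regression residuals

# Durbin-Watson statistic: d = sum (e_t - e_{t-1})^2 / sum e_t^2.
# Its value depends on the ordering of the residual sequence.
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Moran-type coefficient: I = (e^T W e) / (e^T e) with a row-normalized
# spatial weight matrix W (here a hypothetical two-neighbour ring).
n = e.size
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5   # rows sum to 1
I = (e @ W @ e) / (e @ e)
```

For uncorrelated residuals, dw sits near 2 and the Moran-type coefficient near 0; systematic departures signal serial correlation.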
Nonparametric weighted feature extraction for noise whitening least squares
NASA Astrophysics Data System (ADS)
Ren, Hsuan; Chi, Wan-Wei; Pan, Yen-Nan
2007-09-01
The Least Squares (LS) approach is one of the most widely used algorithms for target detection in remote sensing images. It has been proven mathematically that the Noise Whitened Least Squares (NWLS) approach can outperform the original version by making the noise distribution independent and identically distributed (i.i.d.). But in order to obtain good results, the estimation of the noise covariance matrix is very important and still remains a great challenge. Many estimation methods have been proposed in the past. The first type of method assumes that the signal between neighboring pixels should be similar, so that the difference between neighboring pixels, or the high-frequency signal, can be used to represent noise. These include spatial- and frequency-domain high-pass filters and neighborhood pixel subtraction. A more practical method is based on training samples and calculates the covariance matrix between each training sample and its class mean as the noise distribution; this is the within-class scatter matrix in Fisher's Linear Discriminant Analysis. But it is usually not easy to collect enough training samples to yield a full-rank covariance matrix. In this paper, we adopt Nonparametric Weighted Feature Extraction (NWFE) to overcome the rank problem; it is also suitable for modeling non-Gaussian noise. We have also compared the results on a SPOT-5 image scene.
Partial Least Squares for Discrimination in fMRI Data
Andersen, Anders H.; Rayens, William S.; Liu, Yushu; Smith, Charles D.
2011-01-01
Multivariate methods for discrimination were used in the comparison of brain activation patterns between groups of cognitively normal women who are at either high or low Alzheimer's disease risk based on family history and apolipoprotein-E4 status. Linear discriminant analysis (LDA) was preceded by dimension reduction using either principal component analysis (PCA), partial least squares (PLS), or a new oriented partial least squares (OrPLS) method. The aim was to identify a spatial pattern of functionally connected brain regions that was differentially expressed by the risk groups and yielded optimal classification accuracy. Multivariate dimension reduction is required prior to LDA when the data contain more feature variables than there are observations on individual subjects. Whereas PCA has been commonly used to identify covariance patterns in neuroimaging data, this approach only identifies gross variability and is not capable of distinguishing among-group from within-group variability. PLS and OrPLS provide a more focused dimension reduction by incorporating information on class structure and therefore lead to more parsimonious models for discrimination. Performance was evaluated in terms of the cross-validated misclassification rates. The results support the potential of using fMRI as an imaging biomarker or diagnostic tool to discriminate individuals with disease or high risk. PMID:22227352
NASA Technical Reports Server (NTRS)
Chen, Fang-Jenq
1997-01-01
Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of 40,000 is achievable without tedious laboratory calibrations of the camera.
NASA Astrophysics Data System (ADS)
Wu, You; Liu, Jun; Ge, Hui Yong
2016-12-01
Total least squares (TLS) is a technique that solves the traditional least squares (LS) problem for an errors-in-variables (EIV) model, in which both the observation vector and the design matrix are contaminated by random errors. The four- and seven-parameter models of coordinate transformation are typical EIV models. To determine which of TLS and LS is more effective, the relative effectiveness of the two methods was compared through simulation experiments, taking the four- and seven-parameter models of Global Navigation Satellite System (GNSS) coordinate transformation with different coincidence points as examples. The results showed that in the EIV model, the errors-in-variables-only (EIVO) model, and the errors-in-observations-only (EIOO) model, TLS is slightly inferior to LS in the four-parameter coordinate transformation, and TLS is equivalent to LS in the seven-parameter coordinate transformation. Consequently, in the four- and seven-parameter coordinate transformations, TLS has no obvious advantage over LS.
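The classical SVD construction of the TLS solution (from the smallest right singular vector of the augmented matrix [A | b]) can be compared against LS directly. The random 30×3 system is a made-up sanity check, not the GNSS experiment of the paper; with noise-free data both estimators recover the true parameters:

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares via SVD of the augmented matrix [A | b]:
    x_tls = -V[:n, n] / V[n, n] (the classical construction)."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    return -V[:n, n] / V[n, n]

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                    # noise-free observations

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x_tls = tls_solve(A, b)
```

The two estimators diverge only once errors are injected into A (the EIV setting), which is what the simulation experiments above probe.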
Cognitive assessment in mathematics with the least squares distance method.
Ma, Lin; Çetin, Emre; Green, Kathy E
2012-01-01
This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on the attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, the results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.
RNA structural motif recognition based on least-squares distance.
Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin
2013-09-01
RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method, RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR), to effectively recognize RNA structural motifs. A test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA was compiled, and experiments were conducted to recognize these five types of motifs. The experimental results reveal the superiority of the proposed LS-RSMR over four state-of-the-art methods.
Regularized Partial Least Squares with an Application to NMR Spectroscopy
Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletić-Savatić, Mirjana
2014-01-01
High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexibility, general penalties, easy interpretation of results, and fast computation in high-dimensional settings. We also outline extensions of our methods leading to novel methods for non-negative PLS and generalized PLS, an adaptation of PLS for structured data. We demonstrate the utility of our methods through simulations and a case study on proton Nuclear Magnetic Resonance (NMR) spectroscopy data. PMID:24511361
Assessment of column selection systems using Partial Least Squares.
Žuvela, Petar; Liu, J Jay; Plenis, Alina; Bączek, Tomasz
2015-11-13
Column selection systems that characterize chromatographic columns by a scalar measure based on Euclidean distance share a common shortcoming: identical or near-identical values can be calculated for diverse values of the underlying parameters. Proper use of chemometric methods can not only provide a remedy, but also reveal the underlying correlation between these parameters. In this work, parameters of a well-established column selection system (CSS) developed at Katholieke Universiteit Leuven (KUL CSS) have been directly correlated to parameters of selectivity (retention time, resolution, and peak/valley ratio) toward pharmaceuticals by employing Partial Least Squares (PLS). Two case studies were evaluated: the separation of alfuzosin and of lamotrigine, each with their impurities. Within them, a comprehensive correlation structure was revealed and thoroughly interpreted, confirming a causal relationship between the KUL parameters and the parameters of column performance. Furthermore, it was shown that the developed methodology can be applied to any distance-based column selection system.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
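The recursive estimator underlying the method is standard RLS; the sketch below shows only that baseline (parameter and covariance updates one sample at a time) on invented data, without the paper's colored-residual uncertainty correction:

```python
import numpy as np

def rls(X, y, lam=1.0, delta=1000.0):
    """Plain recursive least squares: for each sample x, update the gain k,
    the estimate theta, and the covariance P. lam is a forgetting factor."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                  # large initial covariance
    for x, yi in zip(X, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        theta = theta + k * (yi - x @ theta)
        P = (P - np.outer(k, x) @ P) / lam
    return theta

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
theta_true = np.array([0.8, -0.3])         # hypothetical derivatives
y = X @ theta_true + 0.01 * rng.standard_normal(200)
theta = rls(X, y)
```

With lam = 1 the recursion converges to the batch LS estimate, consistent with the small batch-versus-recursive differences reported above.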
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-01
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Flow Applications of the Least Squares Finite Element Method
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1998-01-01
The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.
Morphological weighted penalized least squares for background correction.
Li, Zhong; Zhan, De-Jian; Wang, Jia-Jun; Huang, Jing; Xu, Qing-Song; Zhang, Zhi-Min; Zheng, Yi-Bao; Liang, Yi-Zeng; Wang, Hong
2013-08-21
Backgrounds present in analytical signals impair the effectiveness of the signals and compromise the selectivity and sensitivity of analytical methods. In order to perform further qualitative or quantitative analysis, the background should be corrected with a reasonable method. For this purpose, a new automatic method for background correction, based on morphological operations and weighted penalized least squares (MPLS), has been developed in this paper. It requires neither prior knowledge about the background nor an iteration procedure or manual selection of a suitable local minimum value. The method has been successfully applied to simulated datasets as well as experimental datasets from different instruments. The results show that the method is quite flexible and can handle different kinds of backgrounds. The proposed MPLS method is implemented and available as an open source package at http://code.google.com/p/mpls.
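The weighted penalized least squares half of such methods is the Whittaker smoother: minimize the weighted fit error plus a roughness penalty on second differences. The sketch below shows only that component (the morphological operations of MPLS are omitted), with an illustrative penalty value:

```python
import numpy as np

def penalized_smooth(y, lam=1e4, w=None):
    """Weighted penalized least squares (Whittaker smoother): solve
    (diag(w) + lam * D^T D) z = w * y, where D takes second differences,
    so z trades fidelity to y against smoothness."""
    n = y.size
    D = np.diff(np.eye(n), 2, axis=0)      # second-difference operator
    w = np.ones(n) if w is None else w
    return np.linalg.solve(np.diag(w) + lam * D.T @ D, w * y)

# A linear baseline has zero second differences, so the smoother
# reproduces it essentially exactly at any penalty strength.
x = np.linspace(0.0, 1.0, 200)
base = 1.0 + 0.5 * x
z = penalized_smooth(base, lam=1e2)
```

In background correction the weights w are typically driven down at peak positions so the smooth curve follows only the baseline.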
A Galerkin least squares approach to viscoelastic flow.
Rao, Rekha R.; Schunk, Peter Randall
2015-10-01
A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.
On convex least squares estimation when the truth is linear.
Chen, Yining; Wellner, Jon A
2016-01-01
We prove that the convex least squares estimator (LSE) attains an n^(-1/2) pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results facilitate a new consistent testing procedure on the linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions are valid in the context of both density estimation and regression function estimation.
Shah, D.K.; Chen, D.J.; Chan, W.S.
1994-12-31
This paper applies the finite element least-squares extrapolation and smoothing technique to demonstrate its advantages in the evaluation of interfacial stress distributions in composite laminates. The analysis uses the quasi-3D finite element modeling technique and complete 3-D analysis using ABAQUS to investigate the stress distributions in graphite/epoxy laminates. Linear (2-point integration) and quadratic (3-point integration) least-squares fits in 8-node quadrilateral and 20-node solid isoparametric elements are demonstrated. The evaluation of the transformation matrix from Gauss-point stresses to nodal stresses was performed using symbolic mathematics in Mathematica. The results show that the use of extrapolation and smoothing offers better estimates of stress distributions and of the interfacial stresses in composite laminates.
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
Evaluation of fatty proportion in fatty liver using least squares method with constraints.
Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay
2014-01-01
Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from a ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since the attenuation value of fat is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical fatty liver data illustrate that the least squares method yields accurate attenuation estimates. The attenuation values are shown to correlate positively with the fatty proportion, which can be used to assess the severity of fatty liver.
A Pascal program for the least-squares evaluation of standard RBS spectra
NASA Astrophysics Data System (ADS)
Hnatowicz, V.; Havránek, V.; Kvítek, J.
1992-11-01
A computer program for least-squares fitting of energy spectra obtained in common Rutherford backscattering (RBS) analyses is described. The samples analyzed by the RBS technique are considered to be made up of a finite number of layers, each with uniform composition. The RBS spectra are treated as a combination of a variable number of three basic shapes (strip, bulge and Gaussian), each represented by an ad hoc analytical expression. The initial parameter estimates are entered by the operator (with the assistance of graphical support on a TV screen), and the result of the fit is displayed on the screen and stored as a table on hard disk.
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using only these calibration samples. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach has proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
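The local modeling idea can be sketched with ordinary weighted least squares standing in for the local PLS-DA model: pick the k training objects most similar to the query, weight them by inverse distance, and fit a small discriminant model on those samples only. The function name, weighting scheme, and toy data below are all illustrative, not the paper's algorithm:

```python
import numpy as np

def lw_ls_classify(X_train, y_train, x_new, k=5):
    """Locally weighted least-squares classification: a simplified stand-in
    for LW-PLS-DA in which ordinary WLS replaces the local PLS-DA model."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]                      # k most similar training objects
    w = 1.0 / (d[idx] + 1e-12)                   # closer objects weigh more
    Xl = np.column_stack([np.ones(k), X_train[idx]])
    Y = np.eye(y_train.max() + 1)[y_train[idx]]  # one-hot class membership
    W = np.diag(w)
    # Weighted least squares via the square-root-weighted design matrix
    B, *_ = np.linalg.lstsq(np.sqrt(W) @ Xl, np.sqrt(W) @ Y, rcond=None)
    scores = np.concatenate([[1.0], x_new]) @ B
    return int(np.argmax(scores))

# Two well-separated toy classes
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
```

Replacing the WLS step with a weighted PLS-DA fit recovers the structure of the proposed method.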
Analysis and computation of a least-squares method for consistent mesh tying
NASA Astrophysics Data System (ADS)
Day, David; Bochev, Pavel
2008-08-01
In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197-1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342-352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Theoretical error estimates are illustrated by numerical experiments.
Generalized total least squares prediction algorithm for universal 3D similarity transformation
NASA Astrophysics Data System (ADS)
Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie
2017-02-01
Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from a GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (possibly with big rotation angles and an arbitrary scale ratio). However, their parameter estimation and new point (non-common point) transformation procedures were implemented separately, and the statistical correlation that often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation jointly, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information, are considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model, and the equation for new point transformation is incorporated into the functional model as well. The iterative solution is then derived based on the Gauss-Newton approach of nonlinear least squares. The performance of the GTLSP algorithm is verified through a simulated experiment, and the results show that the GTLSP algorithm improves the statistical accuracy of the transformed coordinates compared with existing TLS algorithms for 3D similarity transformation.
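For comparison, the ordinary least-squares solution of the similarity transformation from common points, the closed-form Procrustes/Umeyama estimate rather than the paper's GTLSP algorithm, can be sketched as follows (test points and transform parameters are illustrative):

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares 3D similarity transform (Umeyama):
    dst ≈ s * R @ src + t. A simple baseline, not the GTLSP algorithm."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    # Cross-covariance between centered target and source points
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    C = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:      # guard against reflections
        C[2, 2] = -1.0
    R = U @ C @ Vt
    s = np.trace(np.diag(sig) @ C) / (S ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform: scale 2, 90-degree z-rotation, shift (1, 2, 3)
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(src, dst)
```

The GTLSP approach additionally models errors in both coordinate sets and propagates them to the transformed new points; the sketch above treats the source coordinates as error-free.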
Amigo, José Manuel; Ravn, Carsten; Gallagher, Neal B; Bro, Rasmus
2009-05-21
In hyperspectral analysis, PLS-discriminant analysis (PLS-DA) is being increasingly used in conjunction with pure spectra where it is often referred to as PLS-Classification (PLS-Class). PLS-Class has been presented as a novel approach making it possible to obtain qualitative information about the distribution of the compounds in each pixel using little a priori knowledge about the image (only the pure spectrum of each compound is needed). In this short note it is shown that the PLS-Class model is the same as a straightforward classical least squares (CLS) model and it is highlighted that it is more appropriate to view this approach as CLS rather than PLS-DA. A real example illustrates the results of applying both PLS-Class and CLS.
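The CLS model referred to here is simply D = C·S, with S the matrix of known pure-component spectra; the per-pixel concentrations then follow by ordinary least squares. A minimal sketch with synthetic spectra (all values illustrative):

```python
import numpy as np

# Classical least squares (CLS): mixed spectra are modeled as D = C @ S,
# where S holds the known pure-component spectra row-wise; the
# concentrations follow by ordinary least squares, C = D @ pinv(S).
wavelengths = np.linspace(0.0, 1.0, 50)
pure_a = np.exp(-((wavelengths - 0.3) / 0.05) ** 2)   # synthetic pure spectrum A
pure_b = np.exp(-((wavelengths - 0.7) / 0.05) ** 2)   # synthetic pure spectrum B
S = np.vstack([pure_a, pure_b])                       # 2 components x 50 channels

C_true = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])  # 3 "pixels"
D = C_true @ S                                        # noise-free mixed spectra

C_hat = D @ np.linalg.pinv(S)                         # CLS estimate per pixel
```

Only the pure spectra are required, which is exactly the a priori knowledge the PLS-Class approach claims to need.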
Recursive least squares background prediction of univariate syndromic surveillance data
2009-01-01
Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve outbreak detection performance by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which the prediction and detection components of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the
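The background-estimation core, a recursive least squares filter with exponential forgetting, can be sketched as follows. The forgetting factor and the toy drifting trend below are illustrative, not the paper's settings:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One step of exponentially weighted recursive least squares:
    theta tracks the coefficients of y ≈ x @ theta, and the forgetting
    factor lam lets the filter adapt to non-stationary backgrounds."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - x @ theta                # a priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Px)) / lam  # covariance update
    return theta, P

# Track a noisy linear background y = 2*u + 5
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e3                  # large initial covariance: weak prior
for t in range(200):
    x = np.array([t / 100.0, 1.0])   # regressor: [trend, intercept]
    y = 2.0 * (t / 100.0) + 5.0 + rng.normal(0, 0.01)
    theta, P = rls_update(theta, P, x, y)
```

In the paper this filter is preceded by the day-of-the-week transformation and followed by threshold detection on the forecast residuals.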
Seismic Sensor orientation by complex linear least squares
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Olcay, Manuel; Tassara, Carlos; Sobiesiak, Monika; Dahm, Torsten
2014-05-01
Poorly known orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation in several acquisition setups, including linear arrays of geophones deployed in boreholes, ocean bottom seismometers deployed on the sea floor, and surface seismic arrays. To solve this problem we propose an inversion method based on complex linear least squares. Relative orientation angles, with respect to a reference sensor, are retrieved by minimizing the l2-norm between the complex traces (hodograms) of adjacent pairs of sensors in a least-squares sense. The absolute orientations are obtained in a second step by polarization analysis of stacked seismograms of a seismic event with known location. This methodology can be applied without restriction whenever the plane-wave approximation is valid for the wavefields recorded by each pair of sensors. In most cases, this condition can be satisfied by low-pass filtering the recorded waveforms. The main advantage of our methodology is that estimating the relative orientations of seismic sensors in the complex domain is a linear inverse problem, which admits a direct solution corresponding to the global minimum of the misfit function. It is also possible to use more than one independent dataset simultaneously (e.g., several seismic events) to better constrain the solution of the inverse problem. Furthermore, from a computational point of view, our method is faster than relative orientation methods based on waveform cross-correlation. Our methodology can also be applied for testing the correct orientation/alignment of multicomponent land stations in seismological arrays or temporary networks and for determining the absolute orientation of OBS stations and borehole arrays. We first apply our method to real data resembling two different acquisition setups: a borehole sensor array deployed in a gas field located in the
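The key step, estimating a relative orientation angle by complex linear least squares, fits in a few lines: write each pair of horizontal records as a complex trace u = E + iN, so that a rotation by angle a multiplies the trace by e^(ia), and the least-squares estimate of that factor is closed-form. The sign convention and the synthetic traces below are illustrative:

```python
import numpy as np

# Two sensors record the same wavefield; sensor 2 is misoriented by a_true.
# Modeling u2 ≈ u1 * exp(i*a), the complex least-squares estimate of the
# rotation factor is sum(conj(u1)*u2) / sum(|u1|^2): a linear problem with
# a direct, global solution (no grid search or cross-correlation needed).
rng = np.random.default_rng(1)
u1 = rng.normal(size=500) + 1j * rng.normal(size=500)   # E + i*N, reference sensor

a_true = np.deg2rad(35.0)                               # unknown misorientation
noise = 0.01 * (rng.normal(size=500) + 1j * rng.normal(size=500))
u2 = u1 * np.exp(1j * a_true) + noise

z = np.sum(np.conj(u1) * u2) / np.sum(np.abs(u1) ** 2)
a_est = np.angle(z)                                     # estimated relative angle
```

Stacking the sums over several events constrains the same single parameter with more data, which is the multi-dataset extension mentioned above.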
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
NASA Astrophysics Data System (ADS)
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator
Prediction of solubility parameters using partial least square regression.
Tantishaiyakul, Vimon; Worakul, Nimit; Wongpoowarak, Wibul
2006-11-15
The total solubility parameter (delta) values were effectively predicted by using computed molecular descriptors and multivariate partial least squares (PLS) statistics. The molecular descriptors in the derived models included heat of formation, dipole moment, molar refractivity, solvent-accessible surface area (SA), surface-bounded molecular volume (SV), unsaturated index (Ui), and hydrophilic index (Hy). The values of these descriptors were computed by the use of HyperChem 7.5, QSPR Properties module in HyperChem 7.5, and Dragon Web version. The other two descriptors, hydrogen bonding donor (HD), and hydrogen bond-forming ability (HB) were also included in the models. The final reduced model of the whole data set had R(2) of 0.853, Q(2) of 0.813, root mean squared error from the cross-validation of the training set (RMSEcv(tr)) of 2.096 and RMSE of calibration (RMSE(tr)) of 1.857. No outlier was observed from this data set of 51 diverse compounds. Additionally, the predictive power of the developed model was comparable to the well recognized systems of Hansen, van Krevelen and Hoftyzer, and Hoy.
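A bare-bones PLS1 (NIPALS) implementation shows the mechanics behind such models. With as many components as predictors, PLS coincides with ordinary least squares, which the synthetic sanity check below exploits; this is a generic sketch, not the software used in the paper:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Bare-bones PLS1 (NIPALS). Returns the regression vector B for
    centered data: y_hat = y.mean() + (X - X.mean(0)) @ B."""
    Xk = X - X.mean(0)
    yk = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)     # weight vector: max covariance with y
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        qa = yk @ t / tt              # y loading
        Xk = Xk - np.outer(t, p)      # deflate X and y
        yk = yk - qa * t
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # regression vector in X space

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 3))          # 20 "compounds", 3 descriptors
beta = np.array([1.0, 2.0, 3.0])
y = X @ beta                          # exact linear relation
B = pls1(X, y, n_comp=3)              # full-rank PLS reduces to OLS here
```

In practice the component count is chosen by cross-validation, which is what the reported Q(2) statistic summarizes.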
A duct mapping method using least squares support vector machines
NASA Astrophysics Data System (ADS)
Douvenot, Rémi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph
2008-12-01
This paper introduces a "refractivity from clutter" (RFC) approach with an inversion method based on a pregenerated database. The RFC method exploits the information contained in the radar sea clutter return to estimate the refractive index profile. Whereas initial efforts were based on algorithms that achieve good accuracy at high computational cost, the present method is based on a learning machine algorithm in order to obtain a real-time system. This paper shows the feasibility of an RFC technique based on the least squares support vector machine inversion method by comparing it to a genetic algorithm on simulated, noise-free data at 1 and 5 GHz. These data are simulated in the presence of ideal trilinear surface-based ducts. The learning machine is based on a pregenerated database computed using Latin hypercube sampling to improve the efficiency of the learning. The results show that little accuracy is lost compared to the genetic algorithm approach, whose computational time is very high, whereas the learning machine approach runs in real time. The advantage of a real-time RFC system is that it could work on several azimuths in near real time.
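A least squares support vector machine replaces the standard SVM's inequality constraints with equalities, so training reduces to solving a single linear system. A minimal RBF-kernel sketch on toy data (hyperparameters and data are illustrative):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM training: the KKT conditions give one linear system
    [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))        # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma         # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]                    # alpha, b

def lssvm_predict(X, alpha, b, Xq, sigma=1.0):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Toy binary problem with labels in {-1, +1}
X = np.array([[0.0, 0.0], [0.3, 0.2], [3.0, 3.0], [3.2, 2.8]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_train(X, y)
pred = np.sign(lssvm_predict(X, alpha, b, X))
```

The same machinery with a regression target (here, refractivity profile parameters instead of class labels) is what makes the database-driven inversion fast enough for real time.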
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
A recursive least squares-based demodulator for electrical tomography.
Xu, Lijun; Zhou, Haili; Cao, Zhang
2013-04-01
In this paper, a recursive least squares (RLS)-based demodulator is proposed for electrical tomography (ET) systems that employ sinusoidal excitation. The new demodulator can output preliminary demodulation results for the amplitude and phase of a sinusoidal signal after processing the first two samples, and the demodulation precision and signal-to-noise ratio can be further improved by incorporating more samples recursively. A trade-off between speed and precision in the demodulation of electrical parameters can thus be made flexibly according to the specific requirements of an ET system. The RLS-based demodulator is well suited to implementation in a field programmable gate array (FPGA). Numerical simulation was carried out to prove its feasibility and to optimize the relevant parameters for hardware implementation, e.g., the precision of the fixed-point parameters, the sampling rate, and the resolution of the analog-to-digital converter. An FPGA-based capacitance measurement circuit for electrical capacitance tomography was constructed to implement and validate the RLS-based demodulator. Both simulation and experimental results demonstrate that the proposed demodulator is valid, capable of trading off demodulation speed against precision, and brings more flexibility to the hardware design of ET systems.
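The underlying estimation problem is linear: a sampled sinusoid with unknown amplitude, phase, and offset is a linear combination of sin(wt), cos(wt), and 1. The batch least-squares solution below shows the principle; the paper's recursive variant refines the same estimate sample by sample (frequencies and values are illustrative):

```python
import numpy as np

# Least-squares demodulation of y[k] = A*sin(w*t_k + phi) + offset.
# With the basis [sin(wt), cos(wt), 1] the model is linear in (a, b, c),
# and A = hypot(a, b), phi = atan2(b, a).
f, fs = 50.0, 10000.0                     # excitation and sampling frequency
t = np.arange(64) / fs
w = 2 * np.pi * f
A_true, phi_true, offset_true = 1.5, 0.4, 0.2
y = A_true * np.sin(w * t + phi_true) + offset_true

H = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(H, y, rcond=None)[0]
A_est, phi_est = np.hypot(a, b), np.arctan2(b, a)
```

Two samples already determine a preliminary (a, b) estimate; each additional sample refines it, which is exactly the speed-versus-precision trade-off described above.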
Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors
Gavel, D
2002-10-08
A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted to remove any specific modes that are undesirable in the final reconstructor (piston, tip, and tilt, for example) as well as to suppress the (more nebulously defined) localized waffle behavior.
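Projecting an unwanted mode out of a reconstructor is a one-line operation once the mode is expressed over the actuator space. A toy sketch with a random influence matrix rather than an actual AO geometry:

```python
import numpy as np

# Removing an undesired mode (e.g., piston) from a least-squares
# reconstructor: project the reconstructor's output onto the orthogonal
# complement of the mode, so it can never appear in the reconstruction.
rng = np.random.default_rng(6)
n_act, n_slope = 16, 32
G = rng.normal(size=(n_slope, n_act))            # toy influence (poke) matrix
R = np.linalg.pinv(G)                            # least-squares reconstructor

piston = np.ones(n_act) / np.sqrt(n_act)         # unit-norm piston mode
R_f = (np.eye(n_act) - np.outer(piston, piston)) @ R   # filtered reconstructor

phase = R_f @ rng.normal(size=n_slope)           # reconstruction from slopes
```

The paper's contribution is choosing the mode basis so that the localized waffle-like behavior concentrates into a few such removable modes.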
River flow time series using least squares support vector machines
NASA Astrophysics Data System (ADS)
Samsudin, R.; Saad, P.; Shabri, A.
2011-06-01
This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables which work as the time series forecasting for the LSSVM model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
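The WLS-MRA estimator itself is just meta-regression with inverse-variance weights and no additive heterogeneity term. A synthetic-data sketch (effect sizes, standard errors, and the moderator are all simulated for illustration):

```python
import numpy as np

# Unrestricted weighted least squares meta-regression: each reported
# estimate is weighted by the inverse of its squared standard error,
# with no multiplicative or additive adjustment of the weights.
rng = np.random.default_rng(2)
n = 200
se = rng.uniform(0.1, 0.5, n)                    # reported standard errors
x = rng.normal(size=n)                           # a moderator variable
beta0, beta1 = 0.3, 0.5
effect = beta0 + beta1 * x + rng.normal(0, se)   # observed effect sizes

w = 1.0 / se ** 2                                # inverse-variance weights
X = np.column_stack([np.ones(n), x])
Xw = X * np.sqrt(w)[:, None]                     # square-root-weighted design
yw = effect * np.sqrt(w)
coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)   # [intercept, slope]
```

Random-effects meta-regression would instead use weights 1/(se² + τ²) with an estimated heterogeneity variance τ²; the paper's argument concerns when that additive term helps or hurts.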
Non-parametric and least squares Langley plot methods
NASA Astrophysics Data System (ADS)
Kiedron, P. W.; Michalsky, J. J.
2015-04-01
Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
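The basic Langley calibration reduces to a straight-line least-squares fit of ln(V) against air mass. A minimal sketch on synthetic, well-behaved data, i.e., the easy high-mountain case, with illustrative values:

```python
import numpy as np

# Langley plot: ln(V) = ln(V0) - tau * m, so an ordinary least-squares
# line through one morning's measurements recovers both tau and V0.
tau_true, V0_true = 0.12, 2.5
m = np.linspace(1.0, 6.0, 40)                  # air masses through the morning
rng = np.random.default_rng(3)
V = V0_true * np.exp(-tau_true * m) * np.exp(rng.normal(0, 0.002, m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)
tau_est, V0_est = -slope, np.exp(intercept)
```

At difficult sites the optical depth drifts during the measurement sequence, which is when the robust and non-parametric alternatives compared in the paper become relevant.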
Non-parametric and least squares Langley plot methods
NASA Astrophysics Data System (ADS)
Kiedron, P. W.; Michalsky, J. J.
2016-01-01
Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
NASA Astrophysics Data System (ADS)
Machado, A. E. de A.; da Gama, A. A. de S.; de Barros Neto, B.
2011-09-01
A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the AM1 values of the HOMO-LUMO energy gap, the ground-state dipole moment, and the HOMO energy, together with the number of π-electrons. The regression equation predicts the static β values of the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.
Application of the Marquardt least-squares method to the estimation of pulse function parameters
NASA Astrophysics Data System (ADS)
Lundengård, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei
2014-12-01
Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's and DEXP function representing the first positive, first and subsequent negative stroke currents as given in IEC 62305-1 Standard Ed.2, and also for some other fast- and slow-decaying lightning current waveshapes. The results prove the ability of the MLSM to be used for the estimation of parameters of the functions important in lightning discharge modeling.
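A minimal Levenberg-Marquardt loop conveys the method: solve the damped normal equations and adapt the damping according to whether the step reduced the residual. The double-exponential pulse below is a generic stand-in for the standard's lightning current functions, with illustrative parameters:

```python
import numpy as np

def dexp(t, p):
    """Double-exponential pulse i(t) = I0 * (exp(-a*t) - exp(-b*t))."""
    i0, a, b = p
    return i0 * (np.exp(-a * t) - np.exp(-b * t))

def jac(t, p):
    """Analytic Jacobian of dexp with respect to (I0, a, b)."""
    i0, a, b = p
    return np.column_stack([
        np.exp(-a * t) - np.exp(-b * t),
        -i0 * t * np.exp(-a * t),
        i0 * t * np.exp(-b * t),
    ])

def marquardt(t, y, p, lam=1e-3, iters=50):
    """Minimal Marquardt loop: damped normal equations
    (J^T J + lam*I) dp = J^T r, with lam adapted after each trial step."""
    for _ in range(iters):
        r = y - dexp(t, p)
        J = jac(t, p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(3), J.T @ r)
        if ((y - dexp(t, p + dp)) ** 2).sum() < (r ** 2).sum():
            p, lam = p + dp, lam / 3     # accept step, reduce damping
        else:
            lam *= 3                     # reject step, increase damping
    return p

t = np.linspace(0.01, 10.0, 200)
p_true = np.array([1.2, 0.5, 3.0])
y = dexp(t, p_true)                      # noise-free synthetic waveshape
p_est = marquardt(t, y, np.array([1.0, 0.3, 2.0]))
```

The Pulse and Heidler functions of the standard would replace `dexp` and `jac`; the damping logic is unchanged.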
Optimization of absorption placement using geometrical acoustic models and least squares.
Saksela, Kai; Botts, Jonathan; Savioja, Lauri
2015-04-01
Given a geometrical model of a space, the problem of optimally placing absorption in a space to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in the optimal absorption placement on the walls and ceiling of a classroom.
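The reformulation can be sketched as Tikhonov-regularized linear least squares over per-surface absorption coefficients, with the physical [0, 1] bounds enforced crudely by clipping; a real implementation would use a proper bounded solver, and the matrix below is random rather than derived from a geometrical acoustics model:

```python
import numpy as np

# Absorption assignment as regularized linear least squares: rows of A map
# per-surface absorption coefficients to energy-decay observations, b is
# the target response, and Tikhonov regularization stabilizes the solve.
rng = np.random.default_rng(4)
n_obs, n_surf = 30, 6
A = rng.uniform(0.0, 1.0, (n_obs, n_surf))       # stand-in for beam-tracing model
x_true = rng.uniform(0.1, 0.9, n_surf)           # "true" absorption per surface
b = A @ x_true                                   # target observations

lam = 1e-6                                       # regularization weight
x = np.linalg.solve(A.T @ A + lam * np.eye(n_surf), A.T @ b)
x = np.clip(x, 0.0, 1.0)                         # crude physical bounds
```

Multiple frequency bands and source-receiver pairs simply stack additional rows into A and b, which is why the constrained linear formulation scales well.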
Fast Dating Using Least-Squares Criteria and Algorithms.
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
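The simplest of the baselines mentioned, root-to-tip regression, is itself a least-squares dating method and fits in a few lines (the clock rate, sampling dates, and noise level below are synthetic and illustrative):

```python
import numpy as np

# Root-to-tip regression: under a strict molecular clock, root-to-tip
# distance grows linearly with sampling date. The slope estimates the
# substitution rate and the x-intercept estimates the date of the root.
rng = np.random.default_rng(5)
rate_true, t_root = 2e-3, 1990.0
dates = rng.uniform(2000.0, 2020.0, 40)               # tip sampling dates
dist = rate_true * (dates - t_root) + rng.normal(0, 1e-4, 40)

rate_est, b = np.polyfit(dates, dist, 1)              # least-squares line
root_est = -b / rate_est                              # x-intercept = root date
```

The paper's algorithms generalize this idea from root-to-tip paths to all internal nodes at once, exploiting the recursive tree structure to keep the solve (nearly) linear in the number of taxa.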
Fast Dating Using Least-Squares Criteria and Algorithms
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley–Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to
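The root-to-tip regression used above as a standard baseline can be sketched in a few lines: regress root-to-tip divergence on sampling date; the slope estimates the substitution rate and the x-intercept estimates the date of the root. The data here are hypothetical, and a strict clock with perfectly linear divergence is assumed so the fit is exact.

```python
import numpy as np

# Hypothetical serial samples: sampling years and root-to-tip distances
# (substitutions per site), assuming a strict molecular clock.
dates = np.array([1990.0, 1995.0, 2000.0, 2005.0, 2010.0])
dists = np.array([0.010, 0.015, 0.020, 0.025, 0.030])

# Ordinary least squares: distance = rate * (date - t_root)
rate, intercept = np.polyfit(dates, dists, 1)
t_root = -intercept / rate  # x-intercept: estimated date of the root

print(rate, t_root)
```

This is only the simple regression baseline; the fast dating algorithms of the paper exploit the tree structure rather than collapsing it to tip statistics.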
Least-squares reverse time migration in elastic media
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. In addition, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
Least-squares reverse time migration in elastic media
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2016-11-01
Elastic reverse time migration (RTM) can yield more subsurface information (e.g. PP and PS reflectivity) by imaging the multi-component seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. In addition, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyze the influence of model parameterizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parameterizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parameterizations produce fewer artifacts caused by parameter crosstalk than the Lamé coefficient parameterization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its anti-noise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
The moving-least-squares-particle hydrodynamics method (MLSPH)
Dilts, G.
1997-12-31
An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the downside, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
Least squares in calibration: dealing with uncertainty in x.
Tellinghuisen, Joel
2010-08-01
The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y, and though easier to employ are considered to be less reliable. In the case of a linear response function with both sigma(x) and sigma(y) constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression--x upon y when x is error-free and y is uncertain--which yields a bias factor proportional to the ratio sigma(x)^2/sigma(xm)^2 of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x(0)) from measurements (y(0)) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as sigma(x) remains constant, but the estimation of x(0) is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
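The effective-variance idea can be sketched as follows: project the x-uncertainty into y through the slope and refit, iterating because the slope enters the weights. The data and sigmas below are hypothetical; note that with both sigmas constant the weights are uniform, so the EV fit coincides with ordinary LS, exactly as the abstract states.

```python
import numpy as np

# Hypothetical calibration data with error in both x and y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sig_x, sig_y = 0.1, 0.2  # assumed constant standard errors

# Effective-variance weighting: project the x-error into y through the
# slope, then refit; iterate because the slope enters the weights.
m, b = np.polyfit(x, y, 1)          # ordinary LS start
for _ in range(10):
    w = 1.0 / (sig_y**2 + (m * sig_x)**2)   # scalar here (homoscedastic)
    m, b = np.polyfit(x, y, 1, w=np.full_like(x, np.sqrt(w)))
print(m, b)
```

With heteroscedastic data the weight would vary point by point and the iteration would matter; here it converges immediately.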
Finding A Minimally Informative Dirichlet Prior Using Least Squares
Dana Kelly
2011-03-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson lambda, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares
Dana Kelly; Corwin Atwood
2011-03-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson lambda, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
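The linear core of step (4) -- least-squares estimation of amplitude and phase from quadrature samples -- can be sketched for the simpler case of a tone at a known reference frequency. The sample rate, frequency, and signal below are hypothetical; the patented method additionally tracks frequency adaptively.

```python
import numpy as np

# Estimate amplitude and phase of a tone at a known frequency from samples
# by linear least squares on a quadrature (cosine/sine) basis.
fs, f0 = 1000.0, 50.0          # sample rate and reference frequency, Hz
t = np.arange(200) / fs
amp, phase = 1.5, 0.7          # true parameters (to be recovered)
sig = amp * np.cos(2 * np.pi * f0 * t + phase)

# Model: sig = a*cos(w t) + b*sin(w t); solve for a, b in one LS step.
A = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
a, b = np.linalg.lstsq(A, sig, rcond=None)[0]
amp_est = np.hypot(a, b)
phase_est = np.arctan2(-b, a)  # cos(wt+p) = cos p * cos wt - sin p * sin wt
print(amp_est, phase_est)
```

In the full loop the estimates would drive a numerically controlled oscillator so the residual (error signal) tends to zero.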
Michaelis-Menten kinetics, the operator-repressor system, and least squares approaches.
Hadeler, Karl Peter
2013-01-01
The Michaelis-Menten (MM) function is a fractional linear function depending on two positive parameters. These can be estimated by nonlinear or linear least squares methods. The nonlinear methods, based directly on the defect of the MM function, can fail to produce any minimizer. The linear methods always produce a unique minimizer which, however, may not be positive. Here we give sufficient conditions on the data such that the nonlinear problem has at least one positive minimizer and also conditions for the minimizer of the linear problem to be positive. We discuss in detail the models and equilibrium relations of a classical operator-repressor system, and we extend our approach to the MM problem with leakage and to reversible MM kinetics. The arrangement of the sufficient conditions exhibits the important role of data that have a concavity property (chemically feasible data).
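The contrast between the linear and nonlinear fits can be illustrated with a minimal sketch. The data are hypothetical and noise-free, and SciPy's `curve_fit` stands in for a generic nonlinear least-squares solver; it is not the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    """Michaelis-Menten rate law: v = vmax*s / (km + s)."""
    return vmax * s / (km + s)

# Hypothetical substrate concentrations and measured rates.
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
v = mm(s, 10.0, 2.0)   # noise-free for illustration

# Linear LS on the Lineweaver-Burk form 1/v = (km/vmax)*(1/s) + 1/vmax:
# always has a unique minimizer, but it need not be positive for real data.
slope, icept = np.polyfit(1 / s, 1 / v, 1)
vmax_lin, km_lin = 1 / icept, slope / icept

# Nonlinear LS on the MM function itself, started from the linear estimate.
(vmax_nl, km_nl), _ = curve_fit(mm, s, v, p0=[vmax_lin, km_lin])
print(vmax_nl, km_nl)
```

With noisy data the two estimates generally differ, which is where the paper's sufficient conditions for positive minimizers become relevant.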
Analyzing industrial energy use through ordinary least squares regression models
NASA Astrophysics Data System (ADS)
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
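An ordinary least-squares change-point model of the kind described can be sketched with hypothetical temperature/energy data, locating the change point by a simple grid search with a linear LS fit at each candidate. The model form E = b0 + b1*max(T - Tcp, 0) is one common cooling change-point variant, not the thesis's exact specification.

```python
import numpy as np

# Hypothetical monthly mean temperatures (degF) and energy use (MWh).
T = np.array([40., 45., 50., 55., 60., 65., 70., 75., 80.])
E = np.array([100., 100., 100., 100., 105., 115., 125., 135., 145.])

# Grid-search the change point Tcp; at each candidate the remaining
# parameters (b0, b1) come from a plain linear least-squares fit.
best = (np.inf, None)
for cand in np.arange(40.0, 80.0, 0.5):
    X = np.column_stack([np.ones_like(T), np.maximum(T - cand, 0.0)])
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    sse = np.sum((X @ beta - E) ** 2)
    if sse < best[0]:
        best = (sse, (cand, *beta))
sse, (tcp, b0, b1) = best
print(tcp, b0, b1)
```

This replicates the spreadsheet-level workflow the thesis advocates: only a column of regressors and a least-squares fit are needed.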
Data-adapted moving least squares method for 3-D image interpolation
NASA Astrophysics Data System (ADS)
Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho
2013-12-01
In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
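A one-dimensional sketch of the classical moving least squares scheme the paper builds on (the data-adapted modification is not reproduced here): at each evaluation point a local polynomial is fitted by weighted LS, and with a degree-2 basis the scheme reproduces quadratics exactly. The Gaussian weight and bandwidth are illustrative choices.

```python
import numpy as np

def mls(x_eval, x, f, degree=2, h=1.0):
    """Classical 1-D moving least squares: at each evaluation point, fit a
    local polynomial by weighted LS and return its value at that point."""
    out = np.empty_like(x_eval)
    for i, xe in enumerate(x_eval):
        w = np.exp(-((x - xe) / h) ** 2)        # Gaussian weight function
        V = np.vander(x - xe, degree + 1)       # local polynomial basis
        W = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(W * V, np.sqrt(w) * f, rcond=None)
        out[i] = coef[-1]                       # constant term = value at xe
    return out

x = np.linspace(0, 4, 9)
f = x**2                      # MLS with degree 2 reproduces quadratics
xe = np.array([0.3, 1.7, 3.1])
print(mls(xe, x, f))
```

The data-adapted variant of the paper would, roughly, replace the fixed weight function with one tuned to local image structure.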
An experiment-library least-squares method on on-line coal element contents analysis
NASA Astrophysics Data System (ADS)
Wang, HuiDong; Lu, JingBin; Lu, YuPing; Yang, Dong; Ma, KeYan; Yang, Kang; Liu, YuMin; Cheng, DaoWen
2012-11-01
A new experiment-library least-squares (experiment-LLs) method used in the neutron inelastic-scattering and thermal-capture analysis (NITA) technique for on-line coal analysis was developed, which significantly decreases non-linear radiation effects. In this method, sixty samples with preset elemental contents were made, from whose experimental spectra twenty single-element spectrum libraries corresponding to twenty kinds of coal were built using the least-squares method. The spectrum of the unknown sample was analyzed based on these twenty libraries to estimate its element contents. With the initial estimated result, the procedure of building the library and analyzing was iterated to construct a new library with the element contents closest to the unknown sample. Hence the experiment-LLs method can reduce non-linear radiation effects. The experiment-LLs method was implemented on an improved coal analysis system equipped with a long-life 14 MeV pulsed-neutron generator, a bulk BGO detector with a temperature controller, a moisture meter and a coal-smoothing device. The precisions of this system for ash content, water content, volatile content and calorific value have reached 1.0 wt%, 0.5 wt%, 1.0 wt% and 350 kJ/kg, respectively.
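The library least-squares step -- expressing a measured spectrum as a linear combination of single-element library spectra and solving for the contributions -- can be sketched with toy Gaussian "element" spectra. All shapes and numbers below are hypothetical, not NITA data.

```python
import numpy as np

# Toy single-element library spectra: one Gaussian line per element.
channels = np.arange(64)

def peak(center):
    return np.exp(-0.5 * ((channels - center) / 3.0) ** 2)

library = np.column_stack([peak(15), peak(30), peak(48)])  # 3 "elements"
true_w = np.array([0.6, 0.3, 0.1])       # true elemental contributions
measured = library @ true_w              # noise-free measured spectrum

# Library least squares: solve for the contributions of each library entry.
w, *_ = np.linalg.lstsq(library, measured, rcond=None)
print(w)
```

The experiment-LLs refinement then iterates this step, rebuilding the library from samples whose contents are closest to the current estimate.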
Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo
2015-02-01
In recent years, the technology of laser induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser induced breakdown spectroscopy can simultaneously detect multiple elements quickly and simply, without any complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested. This technology is very promising in many fields. It is very important to separate, fit and extract spectral feature lines in laser induced breakdown spectroscopy, which is the cornerstone of spectral feature recognition and subsequent element concentration inversion research. In order to realize effective separation, fitting and extraction of spectral feature lines in laser induced breakdown spectroscopy, the original parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which was overlapped with another line (Fe I: 427.176 nm), was separated from the other one and extracted by using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback after each iteration, in order to prevent the iteration from diverging and to ensure fast convergence. The damped least squares method helps to obtain better results when separating, fitting and extracting spectral feature lines and gives more accurate intensity values for these lines: the spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted. Then, the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the least squares method separately. The calibration curves were plotted, which showed the relationship between spectral
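Damped least squares as described here is the Levenberg-Marquardt scheme: Gauss-Newton with a damping factor that grows when a step fails and shrinks when it succeeds. A minimal sketch with a numerical Jacobian, fitting two overlapping Gaussian lines placed at the Fe and Cr wavelengths quoted above; amplitudes and widths are hypothetical.

```python
import numpy as np

def model(p, x):
    """Two overlapping Gaussian lines (amplitude, center, width each)."""
    a1, c1, s1, a2, c2, s2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2))

def damped_ls(p, x, y, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop: Gauss-Newton with a damping
    factor adjusted from the feedback after each iteration."""
    for _ in range(iters):
        r = y - model(p, x)
        J = np.empty((x.size, p.size))     # numerical Jacobian
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (model(p + dp, x) - model(p, x)) / 1e-6
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        if np.sum((y - model(p + step, x)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 3     # accept step, relax damping
        else:
            lam *= 3                       # reject step, damp harder
    return p

x = np.linspace(426.5, 428.5, 120)
true = np.array([1.0, 427.176, 0.08, 0.7, 427.480, 0.08])  # Fe, Cr lines
y = model(true, x)
fit = damped_ls(np.array([0.8, 427.1, 0.1, 0.5, 427.5, 0.1]), x, y)
print(fit)
```

The fitted amplitudes of the separated lines are what feed the calibration curves the abstract describes.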
FOSLS (first-order systems least squares): An overview
Manteuffel, T.A.
1996-12-31
The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
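One standard route to such a covariance matrix is the nullspace method: eliminate the constraint Cx = d, solve the reduced least-squares problem, and propagate the residual variance back to the full solution. This sketch uses synthetic data and a single sum constraint; it illustrates the idea, not the specific algorithms of Lawson and Hanson's book.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(20)
C, d = np.array([[1.0, 1.0, 1.0]]), np.array([6.0])   # constraint: sum = 6

x_p = np.linalg.lstsq(C, d, rcond=None)[0]        # particular solution
_, _, Vt = np.linalg.svd(C)
Z = Vt[C.shape[0]:].T                             # nullspace basis of C
y, *_ = np.linalg.lstsq(A @ Z, b - A @ x_p, rcond=None)
x = x_p + Z @ y                                   # constrained LS solution

dof = A.shape[0] - (A.shape[1] - C.shape[0])      # m - (n - p)
s2 = np.sum((b - A @ x) ** 2) / dof               # residual variance
cov = s2 * Z @ np.linalg.inv((A @ Z).T @ (A @ Z)) @ Z.T
print(x, np.sqrt(np.diag(cov)))
```

The covariance is singular in the direction normal to the constraint, as it must be: the constrained components carry no uncertainty.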
NASA Astrophysics Data System (ADS)
Kamiński, M.; Szafran, J.
2015-05-01
The main purpose of this work is to verify the influence of the weighting procedure in the Least Squares Method on the probabilistic moments resulting from the stability analysis of steel skeletal structures. We discuss this issue also in the context of the geometrical nonlinearity appearing in the Stochastic Finite Element Method equations for the stability analysis and the preservation of the Gaussian probability density function employed to model the Young modulus of structural steel in this problem. The weighting procedure itself (with both triangular and Dirac-type weighting functions) shows rather marginal influence on all probabilistic coefficients under consideration. This hybrid stochastic computational technique, consisting of the FEM and computer algebra systems (ROBOT and MAPLE packages), may be used for analogous nonlinear analyses in structural reliability assessment.
An Aggregate Constraint Method for Inequality-constrained Least Squares Problems
NASA Astrophysics Data System (ADS)
Peng, Junhuan; Zhang, Hongping; Shong, Suli; Guo, Chunxi
2006-03-01
The inequality-constrained least squares (ICLS) problem can be solved by the simplex algorithm of quadratic programming. The ICLS problem may also be reformulated as a Bayesian problem and solved by using the Bayesian principle. This paper proposes using the aggregate constraint method of non-linear programming to solve the ICLS problem by converting many inequality constraints into one equality constraint, which is a basic augmented Lagrangian algorithm for deriving the solution to equality-constrained non-linear programming problems. Since the new approach finds the active constraints, we can derive the approximate algorithm-dependent statistical properties of the solution. As a result, some conclusions about the superiority of the estimator can be approximately made. Two simulated examples are given to show how to compute the approximate statistical properties and to show that reasonable inequality constraints can improve the results of a geodetic network with an ill-conditioned normal matrix.
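For the common special case where the inequalities are simple bounds, SciPy's bounded least-squares solver gives a quick illustration of how an active constraint changes the estimate. The data are synthetic, and this is the bounded special case, not the aggregate-constraint method the paper proposes.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
b = A @ np.array([0.5, -0.2, 1.5]) + 0.05 * rng.standard_normal(30)

# Inequality constraints x >= 0: the true second coefficient is negative,
# so that constraint should become active and clamp the estimate to zero.
res = lsq_linear(A, b, bounds=(0.0, np.inf))
print(res.x)
```

Identifying which constraints are active is exactly the information the paper uses to derive approximate statistical properties of the constrained estimator.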
ERIC Educational Resources Information Center
Serdahl, Eric
The information that is gained through various analyses of the residual scores yielded by the least squares regression model is explored. In fact, the most widely used methods for detecting data that do not fit this model are based on an analysis of residual scores. First, graphical methods of residual analysis are discussed, followed by a review…
A tutorial history of least squares with applications to astronomy and geodesy
NASA Astrophysics Data System (ADS)
Nievergelt, Yves
2000-09-01
This article surveys the history, development, and applications of least squares, including ordinary, constrained, weighted, and total least squares. The presentation includes proofs of the basic theory, in particular, unitary factorizations and singular-value decompositions of matrices. Numerical examples with real data demonstrate how to set up and solve several types of problems of least squares. The bibliography lists comprehensive sources for more specialized aspects of least squares.
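Of the variants surveyed, total least squares admits a particularly compact solution: for a straight line it reduces to the smallest singular vector of the centered data, treating errors in both coordinates symmetrically. A sketch with hypothetical points:

```python
import numpy as np

x = np.array([0.0, 1.1, 1.9, 3.2, 4.0])
y = np.array([0.1, 2.0, 4.1, 6.0, 8.2])

# Total least squares line y ~ a*x + b: the best-fit line's normal vector
# is the right singular vector for the smallest singular value of the
# centered data matrix, and the line passes through the centroid.
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(X)
nx, ny = Vt[-1]                 # normal vector of the best-fit line
a = -nx / ny                    # slope
b = y.mean() - a * x.mean()     # intercept via the centroid
print(a, b)
```

Ordinary LS minimizes vertical residuals only; the TLS slope here differs slightly because perpendicular distances are minimized instead.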
Parallel Nonnegative Least Squares Solvers for Model Order Reduction
2016-03-01
NNLS problems arise when the Energy Conserving Sampling and Weighting hyper-reduction procedure is used when constructing a reduced-order model...mesh where the nonlinear terms need to be evaluated when integrating the ROM, and their values are weighting factors for each element; hence, a sparse... Weighting, Lawson and Hanson, projected quasi-Newton, ScaLAPACK.
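The Lawson-Hanson active-set algorithm named above is available as scipy.optimize.nnls; a toy instance where the nonnegativity constraint becomes active:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([2.0, 1.0, -1.0])

# Lawson-Hanson NNLS: minimize ||Ax - b|| subject to x >= 0.
# The unconstrained LS solution has a negative second component,
# so NNLS clamps it to zero and refits the remaining column.
x, rnorm = nnls(A, b)
print(x, rnorm)
```

The parallel solvers of the report address the same problem at the much larger scale produced by hyper-reduction.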
Huang, Kang; Wang, Hui-jun; Xu, Hui-rong; Wang, Jian-ping; Ying, Yi-bin
2009-04-01
The application of the least squares support vector machine (LS-SVM) regression method, based on statistical learning theory, to the analysis of near infrared (NIR) spectra of tomato juice is introduced in the present paper. In this method, LS-SVM was used to establish the spectral analysis model and was applied to predict the sugar contents (SC) and available acid (VA) in tomato juice samples. NIR transmission spectra of tomato juice were measured in the spectral range of 800-2,500 nm using an InGaAs detector. The radial basis function (RBF) was adopted as the kernel function of LS-SVM. Sixty-seven tomato juice samples were used as the calibration set, and thirty-three samples were used as the validation set. The results of the method for sugar content (SC) and available acid (VA) prediction were a high correlation coefficient of 0.9903 and 0.9675 and a low root mean square error of prediction (RMSEP) of 0.0056 degree Brix and 0.0245, respectively. Compared to the PLS and PCR methods, the performance of the LS-SVM method was better. The results indicated that it is possible to build statistical models to quantify some common components in tomato juice using near-infrared (NIR) spectroscopy and the least squares support vector machine (LS-SVM) regression method as a nonlinear multivariate calibration procedure, and that LS-SVM could be a rapid and accurate method for juice component determination based on NIR spectra.
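A minimal LS-SVM regression sketch with an RBF kernel, using toy 1-D data and hypothetical hyperparameters (not the paper's spectra or settings): training reduces to solving one linear system in the bias and the support values.

```python
import numpy as np

def rbf(X1, X2, s=1.0):
    """RBF (Gaussian) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0])                  # toy nonlinear target

# LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
gamma, n = 100.0, X.shape[0]
K = rbf(X, X)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_test = np.array([[0.5]])
pred = rbf(X_test, X) @ alpha + b    # should be close to sin(0.5)
print(pred)
```

Unlike the standard SVM, every training point contributes (equality constraints replace the inequality ones), which is what makes the fit a single linear solve.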
NASA Astrophysics Data System (ADS)
Li, Shuai; Li, Lin; Milliken, Ralph; Song, Kaishan
2012-09-01
The goal of this study is to develop an efficient and accurate model for using visible-near infrared reflectance spectra to estimate the abundance of minerals on the lunar surface. Previous studies using partial least squares (PLS) and genetic algorithm-partial least squares (GA-PLS) models for this purpose revealed several drawbacks. PLS has two limitations: (1) redundant spectral bands cannot be removed effectively and (2) nonlinear spectral mixing (i.e., intimate mixtures) cannot be accommodated. Incorporating GA into the model is an effective way of selecting a set of spectral bands that are the most sensitive to variations in the presence/abundance of lunar minerals and to some extent overcomes the first limitation. Given the fact that GA-PLS is still subject to the effect of nonlinearity, here we develop and test a hybrid partial least squares-back propagation neural network (PLS-BPNN) model to determine the effectiveness of BPNN for overcoming the two limitations simultaneously. BPNN takes nonlinearity into account with sigmoid functions, and the weights of redundant spectral bands are significantly decreased through the back propagation learning process. PLS, GA-PLS and PLS-BPNN are tested with the Lunar Soil Characterization Consortium (LSCC) dataset, which includes VIS-NIR reflectance spectra and mineralogy for various soil size fractions, and the accuracy of the models is assessed based on R2 and root mean square error values. The PLS-BPNN model is further tested with 12 additional Apollo soil samples. The results indicate that: (1) PLS-BPNN exhibits the best performance compared with PLS and GA-PLS for retrieving abundances of minerals that are dominant on the lunar surface; (2) PLS-BPNN can overcome the two limitations of PLS; (3) PLS-BPNN has the capability to accommodate spectral effects resulting from variations in particle size. By analyzing PLS beta coefficients, spectral bands selected by GA, and the loading curve of the latent variable with the
Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F
2014-08-01
Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease.
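The core ATS idea, tying degradation rates across temperatures through a single Arrhenius covariate, can be illustrated with a generic nonlinear fit. The model form (zero-order loss), constants, and data below are invented for the sketch and are not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # kJ/(mol K)

def degradation(X, y0, lnA, Ea_kJ):
    """Attribute vs time at several temperatures, Arrhenius rate constant."""
    t, T = X
    k = np.exp(lnA - Ea_kJ / (R * T))   # rate constant at each temperature
    return y0 - k * t                    # assumed zero-order (linear) loss

# synthetic accelerated-stability data at three storage temperatures
t = np.tile(np.array([0.0, 30.0, 60.0, 90.0]), 3)    # days
T = np.repeat(np.array([298.0, 310.0, 323.0]), 4)    # kelvin
true = (100.0, 20.0, 60.0)                           # y0 (%), lnA, Ea (kJ/mol)
rng = np.random.default_rng(1)
y = degradation((t, T), *true) + rng.normal(0, 0.1, t.size)

popt, pcov = curve_fit(degradation, (t, T), y, p0=(99.0, 19.0, 55.0))
```

Fitting all temperatures simultaneously, rather than log-transforming rate estimates per temperature, is what gives the joint approach its precision advantage in the abstract's Monte Carlo comparison.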
Models of spectral unmixing: simplex versus least squares method of resolution
NASA Astrophysics Data System (ADS)
Lavreau, Johan
1995-01-01
Spectral unmixing is presented in textbooks as a straightforward technique whose application apparently poses no problems, yet operational applications are scarce in the literature. The method usually used is based on least squares, minimizing the error in search of the best-fit solution. This method, however, poses problems when applied to real data as the number of end-members increases and/or the compositions of the end-members are similar. An alternative method based on linear algebra has several advantages: (1) no matrix inversion is required, so no meaningless values are generated; (2) not only can a closed-system condition be introduced, but the end-members remain independent (i.e., the last one is not the complement to 1 of the sum of the others, as in the least-squares method); (3) a condition of positive weights can be imposed. The latter condition adds a supplementary equation to the system, so one more end-member may be taken into account, improving both the qualitative and the quantitative aspects of the mixture problem. Examples based on Landsat TM imagery are shown in the fields of vegetation monitoring (subtraction of the vegetal component of the landscape) and spectral geology in arid terrains (end-members being defined through a principal components analysis of the image).
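The positivity and closure conditions discussed above can be illustrated with a tiny unmixing example. SciPy's non-negative least squares stands in here for the paper's simplex-based resolution (which it does not reproduce); the two end-member spectra are made up.

```python
import numpy as np
from scipy.optimize import nnls

# columns of E are two end-member spectra sampled at four bands (invented)
E = np.array([[0.10, 0.80],
              [0.30, 0.60],
              [0.50, 0.40],
              [0.70, 0.20]])
true_w = np.array([0.7, 0.3])
pixel = E @ true_w                 # a linearly mixed pixel

w, resid = nnls(E, pixel)          # abundances constrained to be non-negative
w = w / w.sum()                    # impose the closure (sum-to-one) condition
```

Plain least squares (`np.linalg.lstsq`) would solve the same system but can return negative, physically meaningless abundances when end-members are similar, which is exactly the failure mode the abstract describes.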
Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C
2014-12-01
Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar performance to the reference method.
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) characterizing both nominal and abnormal operating conditions. An efficient yet powerful LS-SVM algorithm, set within a Bayesian inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework for key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin. The Bayesian inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much-coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior
Sparse partial least squares regression for simultaneous dimension reduction and variable selection.
Chun, Hyonho; Keleş, Sündüz
2010-01-01
Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that known asymptotic consistency of the partial least squares estimator for a univariate response does not hold with the very large p and small n paradigm. We derive a similar result for a multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data.
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
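The alternating least-squares step at the heart of multivariate curve resolution can be sketched generically: given a data matrix D ≈ C Sᵀ, alternate least-squares updates of concentration profiles C and spectra S under non-negativity. The kinetics, spectra, and clipping-based constraint below are invented for illustration and do not reproduce the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 50)
# simple A -> B first-order kinetics as "true" concentration profiles
C_true = np.column_stack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
S_true = np.array([[1.0, 0.2, 0.0],      # invented pure-component spectra
                   [0.1, 0.8, 0.5]])
D = C_true @ S_true                      # bilinear data matrix

C = np.abs(rng.normal(size=C_true.shape))   # random initial guess
for _ in range(200):
    # spectra update: least squares given current C, then non-negativity
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    # concentration update: least squares given current S, then non-negativity
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

resid = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

The recovered concentration profiles would then be fed to a nonlinear fit of the proposed kinetic model to extract rate constants, as the abstract describes.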
Genetic and least squares algorithms for estimating spectral EIS parameters of prostatic tissues.
Halter, Ryan J; Hartov, Alex; Paulsen, Keith D; Schned, Alan; Heaney, John
2008-06-01
We employed electrical impedance spectroscopy (EIS) to evaluate the electrical properties of prostatic tissues. We collected freshly excised prostates from 23 men immediately following radical prostatectomy. The prostates were sectioned into 3 mm slices and electrical property measurements of complex resistivity were recorded from each of the slices using an impedance probe over the frequency range of 100 Hz to 100 kHz. The area probed was marked so that following tissue fixation and slide preparation, histological assessment could be correlated directly with the recorded EIS spectra. Prostate cancer (CaP), benign prostatic hyperplasia (BPH), non-hyperplastic glandular tissue and stroma were the primary prostatic tissue types probed. Genetic and least squares parameter estimation algorithms were implemented for fitting a Cole-type resistivity model to the measured data. The four multi-frequency-based spectral parameters defining the recorded spectrum (ρ∞, Δρ, f_c and α) were determined using these algorithms and statistically analyzed with respect to the tissue type. Both algorithms fit the measured data well, with the least squares algorithm having a better average goodness of fit (95.2 mΩ m versus 109.8 mΩ m) and a faster execution time (80.9 ms versus 13 637 ms) than the genetic algorithm. The mean parameters, from all tissue samples, estimated using the genetic algorithm ranged from 4.44 to 5.55 Ω m, 2.42 to 7.14 Ω m, 3.26 to 6.07 kHz and 0.565 to 0.654 for ρ∞, Δρ, f_c and α, respectively. These same parameters estimated using the least squares algorithm ranged from 4.58 to 5.79 Ω m, 2.18 to 6.98 Ω m, 2.97 to 5.06 kHz and 0.621 to 0.742 for ρ∞, Δρ, f_c and α, respectively. The ranges of these parameters were similar to those reported in the literature. Further, significant differences (p < 0.01) were observed between CaP and BPH for the spectral parameters Δρ and f
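Least-squares fitting of a Cole-type dispersion over the same 100 Hz to 100 kHz range can be sketched as follows. The model form and starting values are a generic Cole-Cole-style parameterization chosen for illustration, not the authors' exact implementation; the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def cole(f, rho_inf, d_rho, fc, alpha):
    """Cole-type complex resistivity dispersion (assumed illustrative form)."""
    return rho_inf + d_rho / (1 + (1j * f / fc) ** alpha)

f = np.logspace(2, 5, 30)                 # 100 Hz .. 100 kHz
true = (4.6, 3.0, 4.0e3, 0.65)            # rho_inf, d_rho, f_c, alpha (invented)
z = cole(f, *true)                        # noiseless synthetic spectrum

def residuals(p):
    m = cole(f, *p) - z
    return np.concatenate([m.real, m.imag])   # stack real/imag for a real LSQ

fit = least_squares(residuals, x0=(4.0, 2.0, 3.0e3, 0.6))
```

Splitting the complex residual into real and imaginary parts is the standard way to hand a complex-valued model to a real-valued least-squares solver.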
NASA Astrophysics Data System (ADS)
Ismail, S.; Samsudin, R.; Shabri, A.
2010-10-01
Successful river flow time series forecasting is a major goal and an essential procedure in water resources planning and management. This study introduces a new hybrid model based on a combination of two familiar nonlinear mathematical modeling methods: the Self-Organizing Map (SOM) and the Least Squares Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model. The hybrid model uses the SOM algorithm to cluster the training data into several disjoint clusters, and individual LSSVMs are used to forecast the river flow. The feasibility of the proposed model is evaluated using actual river flow data from the Bernam River in Selangor, Malaysia. The results are compared with those obtained using LSSVM and artificial neural network (ANN) models. The experimental results show that the SOM-LSSVM model outperforms the other models for forecasting river flow, indicating that the proposed model forecasts more precisely and provides a promising alternative technique for river flow forecasting.
Hasegawa, K; Funatsu, K
2000-01-01
Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS over MLR are illustrated with typical applications. The genetic algorithm (GA) is a novel optimization technique which can be used as a search engine in variable selection. A hybrid approach combining GA and PLS for variable selection developed in our group (GAPLS) is described. A more advanced method for comparative molecular field analysis (CoMFA) modeling, called GA-based region selection (GARGS), is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with some representative examples. GA can also be hybridized with nonlinear modeling methods such as artificial neural networks (ANN), providing useful tools in chemometrics and QSAR.
Baseline configuration for GNSS attitude determination with an analytical least-squares solution
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2016-12-01
GNSS attitude determination using carrier phase measurements with 4 antennas is studied on the condition that the integer ambiguities have been resolved. The solution to the nonlinear least-squares problem is usually obtained iteratively; however, an analytical solution exists for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single- and double-difference measurements are treated, which correspond to dedicated and non-dedicated receivers, respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given, together with its error variance-covariance matrix.
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad
2016-12-21
In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison, with especially better performance at the boundaries. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
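The one-dimensional analogue of this spline-based integration is easy to demonstrate: fit a spline to sampled slope data, then integrate the piecewise polynomial analytically to recover height. The profile below is synthetic, and a plain cubic spline stands in for the paper's least-squares formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 1, 21)
height = 0.1 * np.sin(2 * np.pi * x)            # "true" profile (invented)
slope = 0.2 * np.pi * np.cos(2 * np.pi * x)     # its analytic derivative

spl = CubicSpline(x, slope)                     # piecewise-polynomial slope model
# analytic integration of the spline from the first sample to each point,
# anchored at the known starting height
recon = np.array([spl.integrate(x[0], xi) for xi in x]) + height[0]
```

Because the spline is integrated analytically rather than by a finite-difference rule, the reconstruction error is governed by the spline's fit to the slope data, which is the mechanism behind the boundary-accuracy advantage claimed above.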
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
NASA Astrophysics Data System (ADS)
Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad
2017-04-01
In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison, with especially better performance at the boundaries. The noise influence is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
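The connection between least squares and Bayesian inference that the tutorial highlights can be shown in a few lines: for a Gaussian likelihood with a flat prior, the log-posterior is a constant minus chi-squared over two, so the posterior mode coincides with the (weighted) least-squares estimate. The line-through-the-origin data below are invented for the demonstration.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sig = 0.2 * np.ones_like(y)                 # assumed measurement errors

# posterior over the slope a on a grid: log p(a|D) = const - chi^2(a)/2
a_grid = np.linspace(1.5, 2.5, 2001)
chi2 = (((y[:, None] - a_grid[None, :] * x[:, None]) / sig[:, None]) ** 2).sum(0)
a_map = a_grid[np.argmin(chi2)]             # posterior mode = chi^2 minimum

# closed-form weighted least-squares slope for comparison
a_wls = (x * y / sig**2).sum() / (x**2 / sig**2).sum()
```

Normalizing `np.exp(-chi2 / 2)` over the grid would give the full posterior density, the starting point for the marginalization and hierarchical extensions the tutorial goes on to discuss.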
Probabilistic partial least squares regression for quantitative analysis of Raman spectra.
Li, Shuo; Nyagilo, James O; Dave, Digant P; Wang, Wei; Zhang, Baoju; Gao, Jean
2015-01-01
With the latest development of the Surface-Enhanced Raman Scattering (SERS) technique, quantitative analysis of Raman spectra has shown potential and a promising trend of development for in vivo molecular imaging. Partial Least Squares Regression (PLSR) is the state-of-the-art method, but it relies only on training samples, which makes it difficult to incorporate complex domain knowledge. Based on probabilistic Principal Component Analysis (PCA) and the probabilistic curve fitting idea, we propose a probabilistic PLSR (PPLSR) model and an Expectation-Maximization (EM) algorithm for estimating its parameters. This model explains PLSR from a probabilistic viewpoint, describes its essential meaning, and provides a foundation for future Bayesian nonparametric models. Two real Raman spectra datasets were used to evaluate this model, and experimental results show its effectiveness.
The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.
2006-01-01
The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting starlight interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.
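A monochromatic fringe fit of the kind analyzed here separates neatly into a linear and a nonlinear part: at a trial wavelength, the fringe I(d) = A + B cos(2πd/λ) + C sin(2πd/λ) is linear in (A, B, C) and can be solved by ordinary least squares, and scanning the trial wavelength for the minimum residual estimates λ. The scan values and fringe parameters below are invented and do not reproduce the MAM calibration pipeline.

```python
import numpy as np

d = np.linspace(0, 5e-6, 200)        # OPD scan in meters (synthetic)
lam_true = 633e-9
I = 1.0 + 0.8 * np.cos(2 * np.pi * d / lam_true + 0.3)

def chi2(lam):
    """Residual of the linear fringe fit at a trial wavelength."""
    M = np.column_stack([np.ones_like(d),
                         np.cos(2 * np.pi * d / lam),
                         np.sin(2 * np.pi * d / lam)])
    coef, *_ = np.linalg.lstsq(M, I, rcond=None)
    r = I - M @ coef
    return r @ r

lams = np.linspace(630e-9, 636e-9, 601)          # 0.01 nm grid
lam_fit = lams[np.argmin([chi2(l) for l in lams])]
```

In practice the grid search would be refined by a local nonlinear optimizer, and the quasi-monochromatic envelope mentioned above would multiply the cosine term.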
Borin, Alessandra; Ferrão, Marco Flôres; Mello, Cesar; Maretto, Danilo Althmann; Poppi, Ronei Jesus
2006-10-02
This paper proposes the use of the least-squares support vector machine (LS-SVM) as an alternative multivariate calibration method for the simultaneous quantification of some common adulterants (starch, whey or sucrose) found in powdered milk samples, using near-infrared spectroscopy with direct measurements by diffuse reflectance. Due to the spectral differences of the three adulterants, a nonlinear behavior is present when all groups of adulterants are in the same data set, making the use of linear methods such as partial least squares regression (PLSR) difficult. Excellent models were built using LS-SVM, with low prediction errors and superior performance relative to PLSR. These results show that it is possible to build robust models to quantify some common adulterants in powdered milk using near-infrared spectroscopy and LS-SVM as a nonlinear multivariate calibration procedure.
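Unlike a standard SVM, LS-SVM regression replaces the inequality constraints with equality constraints, so the dual problem reduces to a single linear system. The sketch below shows that system with an RBF kernel on invented one-dimensional data; it is a generic LS-SVM illustration, not the paper's calibration model.

```python
import numpy as np

def rbf(A, B, s=0.5):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0])                 # smooth synthetic target

# LS-SVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
K = rbf(X, X)
gamma = 100.0                            # regularization constant
n = len(y)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = rbf(X, X) @ alpha + b             # in-sample predictions
```

The kernel is what lets LS-SVM absorb the nonlinear behavior that, per the abstract, makes linear PLSR struggle when all adulterant groups share one data set.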
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
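The slope method's MLE-equivalent fit is ordinary weighted least squares on the log range-corrected signal, since ln S(r) + 2 ln r = ln C − 2αr is linear in r. The sketch below uses noiseless synthetic data and an invented weight model; it illustrates the inverse-variance weighting the abstract identifies as optimal, not the paper's evaluation.

```python
import numpy as np

r = np.linspace(200.0, 2000.0, 50)      # range gates (m), synthetic
alpha_true, lnC = 1e-3, 10.0
y = lnC - 2 * alpha_true * r            # log range-corrected signal
w = 1.0 / (0.05 + 1e-4 * r)             # inverse-variance weights (assumed model)

# weighted normal equations for the line y = b0 + b1 * r
A = np.column_stack([np.ones_like(r), r])
W = np.diag(w)
b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
alpha_hat = -b[1] / 2                   # extinction coefficient estimate
```

With real data, `w` would be estimated from the noise variance at each gate; the abstract notes that even coarse weight estimates degrade the retrieval only modestly.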
The program LOPT for least-squares optimization of energy levels
NASA Astrophysics Data System (ADS)
Kramida, A. E.
2011-02-01
The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem. Program summary: Program title: LOPT Catalogue identifier: AEHM_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19 254 No. of bytes in distributed program, including test data, etc.: 427 839 Distribution format: tar.gz Programming language: Perl v.5 Computer: PC, Mac, Unix workstations Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX) RAM: 3 Mwords or more Word size: 32 or 64 Classification: 2.2 Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals. Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution. Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer. Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision. Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
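The linear matrix equation LOPT solves can be sketched at toy scale: each measured transition wavenumber is modeled as a difference of two unknown level energies, the ground level is fixed at zero, and the overdetermined system is solved by least squares. The level scheme, observations, and noise below are invented; this does not reproduce LOPT's variable-substitution construction or its uncertainty output.

```python
import numpy as np

n = 4                                        # levels E_0..E_3, E_0 fixed at 0
pairs = [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2)]   # observed transitions i -> j
E_true = np.array([0.0, 100.0, 250.0, 420.0])
obs = np.array([E_true[i] - E_true[j] for i, j in pairs])
obs = obs + np.array([0.02, -0.01, 0.01, 0.0, -0.02])  # measurement noise

# design matrix: each row encodes sigma_ij ~ E_i - E_j for unknowns E_1..E_3
A = np.zeros((len(pairs), n - 1))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] += 1.0
    if j > 0:
        A[k, j - 1] -= 1.0

E_fit, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

Ritz wavenumbers are then differences of the fitted levels, and their uncertainties follow from the covariance matrix (AᵀA)⁻¹ scaled by the measurement variances.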
De Beuckeleer, Liene I; Herrebout, Wouter A
2016-02-05
To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
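The model-order selection step described above, increasing the polynomial degree and comparing information criteria to avoid overfitting, can be sketched with a generic AIC comparison for Gaussian residuals. The data and true degree below are invented and the sketch omits the paper's non-negativity constraints.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.05, x.size)  # true degree 2

def aic(deg):
    """AIC for a least-squares polynomial fit with Gaussian residuals."""
    c = np.polyfit(x, y, deg)
    rss = ((y - np.polyval(c, x)) ** 2).sum()
    k = deg + 1                              # number of fitted coefficients
    return x.size * np.log(rss / x.size) + 2 * k

best = min(range(1, 8), key=aic)
```

A higher-degree fit always lowers the residual sum of squares, so without the 2k penalty the comparison would always favor the overfitted model whose extra terms cancel one another, the compensation effect the abstract corrects for.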
NASA Astrophysics Data System (ADS)
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte-Carlo Library Least Square (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma-ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as the neutron moderator, along with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
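The attenuation of the OLS slope under error in X, and its correction by a structural (errors-in-variables) estimator, is easy to reproduce in a small Monte Carlo run. A Deming-style estimator with a known error-variance ratio stands in here for the paper's SA method; the parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
true_slope, n = 2.0, 2000
xt = rng.normal(0, 1, n)                 # true X values
x = xt + rng.normal(0, 0.5, n)           # observed X, contaminated with error
y = true_slope * xt + rng.normal(0, 0.5, n)

# OLS slope: biased toward zero when X carries error
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# structural (Deming) slope with known error-variance ratio lam = var_ey/var_ex
lam = 1.0
sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y)[0, 1]
struct = (syy - lam * sxx
          + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy**2)) / (2 * sxy)
```

With these variances, the OLS slope concentrates near 2/(1 + 0.25) = 1.6 while the structural estimate stays near 2, matching the paper's finding that SA is preferable for estimating the "true" relation.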
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
The numerical robustness of four generally applicable, recursive least-squares estimation schemes is analyzed by means of a theoretical round-off propagation study. This study yields a number of practically interesting insights into widely used recursive least-squares schemes, which have also been confirmed in an experimental study.
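For reference, here is a minimal sketch (Python/NumPy, not from the paper) of the conventional covariance-form recursive least-squares update whose round-off behavior such studies analyze; the toy system and noise level are illustrative:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One conventional RLS step (covariance form), forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    err = y - x @ theta              # a priori prediction error
    theta = theta + k * err
    P = (P - np.outer(k, Px)) / lam  # covariance downdate
    return theta, P

rng = np.random.default_rng(1)
true_theta = np.array([1.5, -0.7])
theta = np.zeros(2)
P = 1e3 * np.eye(2)                  # large initial covariance
for _ in range(500):
    x = rng.normal(size=2)
    y = x @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)  # ≈ [1.5, -0.7]
```

The covariance downdate `P - k (Px)^T` is exactly the step where round-off errors can accumulate and destroy the symmetry and positive definiteness of P, which is what motivates the robustness comparison of alternative formulations.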
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To capture time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid updating the model too frequently, a confidence value is introduced and updated adaptively according to the results of a model performance assessment; the model is updated only when the confidence value is updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.
2005-01-01
A "multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result, followed by a smooth evolution under the new regime. The switching behavior of such systems is not well understood, and identification of multimode systems therefore typically requires a preprocessing step to classify the observed data according to mode of operation. A further consequence of the switched nature of these systems is that the data available for parameter estimation of any one subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the vestibulo-ocular reflex (VOR). The approach will also allow the identification of other nonlinear biological systems suspected of containing "hard" nonlinearities.
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages are the facts that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold uniformly in the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
Hong, X; Chen, S; Sharkey, P M
2004-02-01
This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to maximise model robustness via two effective and complementary approaches: parameter regularisation via ridge regression and selection of a model structure with optimal generalisation. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive computation formula for PRESS errors in the regularised orthogonal least squares forward regression framework, and hence to construct a model with good generalisation properties. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated model construction procedure without resort to any separate validation data set for model evaluation.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system whose governing equations cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one for each said group, are then run, each treating the other unknown parameters appearing in its regression equation as if they were known perfectly, with their values provided by the recursive least squares estimates from the other groups. This enables the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. The invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
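A batch, two-group analogue of this idea can be sketched in a few lines. The model y = a·u + a·b·v below is a hypothetical example, chosen because it is nonlinear in (a, b) jointly but linear in each parameter when the other is held fixed; the alternating loop stands in for the patent's concurrent recursive estimators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model, nonlinear in (a, b) jointly but linear in each
# group when the other is held fixed:  y = a*u + (a*b)*v
a_true, b_true = 2.0, 0.5
u = rng.normal(size=200)
v = rng.normal(size=200)
y = a_true * u + a_true * b_true * v + 0.01 * rng.normal(size=200)

a, b = 1.0, 0.0  # initial guesses
for _ in range(20):
    # Group 1: with b held fixed, y = a*(u + b*v) is linear in a
    z = u + b * v
    a = (z @ y) / (z @ z)
    # Group 2: with a held fixed, y - a*u = (a*b)*v is linear in b
    b = (v @ (y - a * u)) / (a * (v @ v))
print(a, b)  # ≈ (2.0, 0.5)
```

Each group's subproblem is an ordinary linear least-squares solve, which is the point of the segmentation: no nonlinear optimizer is ever invoked.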
Bai, Yulei; Jia, Quanjie; Zhang, Yun; Huang, Qiquan; Yang, Qiyu; Ye, Shuangli; He, Zhaoshui; Zhou, Yanzhou; Xie, Shengli
2016-05-01
It is important to improve the depth resolution in depth-resolved wavenumber-scanning interferometry (DRWSI) owing to the limited range of wavenumber scanning. In this work, a new nonlinear iterative least-squares algorithm, called the wavenumber-domain least-squares algorithm (WLSA), is proposed for evaluating the phase of DRWSI. The simulated and experimental results of the Fourier transform (FT), complex-number least-squares algorithm (CNLSA), eigenvalue-decomposition and least-squares algorithm (EDLSA), and WLSA were compared and analyzed. According to the results, the WLSA is less dependent on the initial values, and the depth resolution is improved approximately from δz to δz/6. Thus, the WLSA exhibits better performance than the FT, CNLSA, and EDLSA.
Van Gestel, T; Suykens, J A K; Lanckriet, G; Lambrechts, A; De Moor, B; Vandewalle, J
2002-05-01
The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by first mapping the input vector in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large-margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in Gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence…
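The linear system at the heart of the LS-SVM classifier is small enough to show directly. The sketch below builds Suykens' dual system and solves it in one step; the RBF kernel, hyperparameters, and toy data are illustrative assumptions, not taken from the article:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving ONE linear system
    (dual formulation with equality constraints)."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma  # ridge term from the LS cost
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                 # alpha, bias b

def lssvm_predict(X_train, y_train, alpha, b, X, sigma=1.0):
    return np.sign(rbf(X, X_train, sigma) @ (alpha * y_train) + b)

# toy two-class problem
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
alpha, b = lssvm_train(X, y)
acc = (lssvm_predict(X, y, alpha, b, X) == y).mean()
print(acc)  # training accuracy
```

Note the contrast with a standard SVM: no QP solver appears anywhere; the equality constraints turn training into a single dense linear solve, which is exactly what makes the analytic Bayesian treatment tractable.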
Least squares with non-normal data: estimating experimental variance functions.
Tellinghuisen, Joel
2008-02-01
Contrary to popular belief, the method of least squares (LS) does not require that the data have normally distributed (Gaussian) error for its validity. One practically important application of LS fitting that does not involve normal data is the estimation of data variance functions (VFE) from replicate statistics. If the raw data are normal, sampling estimates s² of the variance σ² are χ² distributed. For small degrees of freedom, the χ² distribution is strongly asymmetrical; exponential in the case of three replicates, for example. Monte Carlo computations for linear variance functions demonstrate that with proper weighting, the LS variance-function parameters remain unbiased, minimum-variance estimates of the true quantities. However, the parameters are strongly non-normal; almost exponential for some parameters estimated from s² values derived from three replicates, for example. Similar LS estimates of standard deviation functions from estimated s values have a predictable and correctable bias stemming from the bias inherent in s as an estimator of σ. Because s² and s have uncertainties proportional to their magnitudes, the VFE and SDFE fits require weighting as s⁻⁴ and s⁻², respectively. However, these weights must be evaluated on the calculated functions rather than directly from the sampling estimates. The computation is thus iterative but usually converges in a few cycles, with remaining 'weighting' bias sufficiently small as to be of no practical consequence.
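The iterative weighting scheme described above can be sketched as follows. The linear variance function, replicate counts, and numerical tolerances are illustrative, not taken from the article; the key point is that the σ⁻⁴ weights are evaluated on the calculated function, not on the raw s² values:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_vfe(x, s2, n_iter=8):
    """Weighted LS fit of a linear variance function sigma^2 = a + b*x.
    Since var(s^2) is proportional to sigma^4, the weights are 1/sigma^4,
    evaluated on the CALCULATED function (not the raw s^2) and iterated."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, s2, rcond=None)[0]   # unweighted start
    for _ in range(n_iter):
        fitted = np.clip(X @ beta, 0.05, None)     # guard against <= 0
        w = 1.0 / fitted**2                        # 1/sigma^4
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * s2))
    return beta

# Monte Carlo check of (near) unbiasedness with 3 replicates per point,
# where s^2 is strongly asymmetric (chi^2 with 2 degrees of freedom)
a_true, b_true = 0.5, 0.2
x = np.linspace(1, 10, 40)
sigma = np.sqrt(a_true + b_true * x)
est = []
for _ in range(400):
    s2 = np.var(rng.normal(0, sigma, (3, 40)), axis=0, ddof=1)
    est.append(fit_vfe(x, s2))
m = np.mean(est, axis=0)
print(m)  # ≈ [0.5, 0.2] despite wildly non-normal s^2 data
```

Weighting directly by the raw s⁻⁴ values would, by contrast, systematically favor points whose s² happened to come out small, producing exactly the bias the article warns against.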
Li, Qing-Bo; Huang, Zheng-Wei
2014-02-01
To improve the prediction accuracy of quantitative near-infrared spectroscopic analysis of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modelling method suitable for human glucose measurement, named NAP-RBFPLS. First, NAP is used to preprocess the near-infrared spectra of blood glucose in order to extract, from the original spectra, only the information that relates to the glucose signal. This weakens the occasional correlation between glucose changes and interference factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, drift of the measuring instruments, and changes in the measuring environment and conditions. A nonlinear quantitative analysis model is then built from the NAP-processed spectra to capture the nonlinear relationship between glucose concentration and the near-infrared spectra caused by strong scattering in the body. The new method is compared with three other quantitative models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS), and RBFPLS, respectively. The experimental results show that the nonlinear calibration model combining the NAP algorithm with RBFPLS regression greatly improves the prediction accuracy on the prediction sets, indicating that this model-building method has practical applications in research on non-invasive detection of human glucose concentrations.
NASA Astrophysics Data System (ADS)
Gipson, Geoffrey T.; Tatsuoka, Kay S.; Sweatman, Brian C.; Connor, Susan C.
2006-12-01
Biomarker discovery through analysis of high-throughput NMR data is a challenging, time-consuming process due to the requirement of sophisticated, dataset specific preprocessing techniques and the inherent complexity of the data. Here, we demonstrate the use of weighted, constrained least-squares for fitting a linear mixture of reference standard data to complex urine NMR spectra as an automated way of utilizing current assignment knowledge and the ability to deconvolve confounded spectral regions. Following the least-squares fit, univariate statistics were used to identify metabolites associated with group differences. This method was evaluated through applications on simulated datasets and a murine diabetes dataset. Furthermore, we examined the differential ability of various weighting metrics to correctly identify discriminative markers. Our findings suggest that the weighted least-squares approach is effective for identifying biochemical discriminators of varying physiological states. Additionally, the superiority of specific weighting metrics is demonstrated in particular datasets. An additional strength of this methodology is the ability for individual investigators to couple this analysis with laboratory specific preprocessing techniques.
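A minimal version of fitting a linear mixture of reference standards by weighted, nonnegativity-constrained least squares might look like this. Synthetic Gaussian "spectra" stand in for real reference data, and the weighting metric shown is just one plausible choice, not the paper's:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Hypothetical reference "spectra": Gaussian peaks on a common axis
axis = np.linspace(0, 10, 500)
def peak(center, width=0.3):
    return np.exp(-((axis - center) ** 2) / (2 * width**2))

refs = np.column_stack([peak(2.0), peak(4.5), peak(4.8), peak(7.0)])
conc_true = np.array([1.0, 0.4, 0.8, 0.0])       # "metabolite" levels
spectrum = refs @ conc_true + 0.01 * rng.normal(size=axis.size)

# weighted, nonnegativity-constrained least squares:
# minimize || W^(1/2) (refs @ c - spectrum) ||  subject to c >= 0
w = np.sqrt(1.0 / (0.01 + np.abs(spectrum)))     # one possible weighting
conc, _ = nnls(w[:, None] * refs, w * spectrum)
print(conc)  # ≈ [1.0, 0.4, 0.8, 0.0], despite the overlap at 4.5/4.8
```

The overlapping peaks at 4.5 and 4.8 illustrate the deconvolution of confounded spectral regions: the fit apportions intensity between them because the full peak shapes, not just the crowded region, constrain the solution.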
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
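One of the classic violations is easy to demonstrate: OLS applied to a binary outcome (the linear probability model) can produce fitted "probabilities" outside [0, 1], while logistic regression cannot. A small sketch with simulated data, fitting the logistic model by Newton-Raphson (IRLS); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-4, 4, 300)
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.uniform(size=300) < p_true).astype(float)  # binary outcome
X = np.column_stack([np.ones_like(x), x])

# OLS (linear probability model): fitted values can exit [0, 1]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
p_ols = X @ beta_ols

# logistic regression by Newton-Raphson (iteratively reweighted LS)
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                                 # IRLS weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
p_logit = 1 / (1 + np.exp(-X @ beta))

print(p_ols.min(), p_ols.max())      # strays outside [0, 1]
print(p_logit.min(), p_logit.max())  # always inside (0, 1)
```

The IRLS loop also makes the connection between the two methods concrete: each logistic iteration is itself a weighted least-squares solve, with weights p(1-p) that shrink toward zero exactly where the linear model misbehaves.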
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
A class of least-squares filtering and identification algorithms with systolic array architectures
NASA Technical Reports Server (NTRS)
Kalson, Seth Z.; Yao, Kung
1991-01-01
A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.
Multilevel solvers of first-order system least-squares for Stokes equations
Lai, Chen-Yao G.
1996-12-31
Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L²-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.
Least-squares methods involving the H⁻¹ inner product
Pasciak, J.
1996-12-31
Least-squares methods have been shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H⁻¹ norm. Such norms give rise to improved convergence estimates and better approximation for problems with low-regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H⁻¹ inner product.
On the interpretation of least squares collocation. [for geodetic data reduction]
NASA Technical Reports Server (NTRS)
Tapley, B. D.
1976-01-01
A demonstration is given of the strict mathematical equivalence between least squares collocation and the classical minimum variance estimates. It is shown that the least squares collocation algorithms are a special case of the modified minimum variance estimates. The computational efficiency of several forms of the general minimum variance estimation algorithm is discussed. It is pointed out that for certain geodetic applications the least squares collocation algorithm may provide a more efficient formulation of the results from the point of view of the computations required.
Multi-element array signal reconstruction with adaptive least-squares algorithms
NASA Technical Reports Server (NTRS)
Kumar, R.
1992-01-01
Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach
NASA Astrophysics Data System (ADS)
Katkovnik, Vladimir; Bioucas-Dias, José
2010-04-01
Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piecewise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended (periodized) ambiguity, compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from the filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.
Hao, Ming; Wang, Yanli; Bryant, Stephen H
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, the proposed algorithm achieves state-of-the-art results, with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), respectively, based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, and other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets.
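The regularized least squares core of such kernel methods reduces to a single linear solve. The sketch below shows generic kernel RLS on a toy regression problem; it illustrates only the RLS step, not the paper's kernel-fusion procedure, and the RBF kernel and hyperparameters are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def rls_fit_predict(X_train, y, X_test, lam=1e-3, sigma=0.5):
    """Kernel regularized least squares: alpha = (K + lam*I)^-1 y,
    predicted scores for new items are K_test,train @ alpha."""
    K = rbf_kernel(X_train, X_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return rbf_kernel(X_test, X_train, sigma) @ alpha

# toy 1-D regression: recover sin(x) from 50 samples
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X[:, 0])
Xt = np.linspace(0.3, 6.0, 25)[:, None]
pred = rls_fit_predict(X, y, Xt)
err = np.max(np.abs(pred - np.sin(Xt[:, 0])))
print(err)  # small: the kernel RLS fit interpolates smoothly
```

In the DTI setting, the same solve is applied with kernels built from drug and target similarity matrices; the fusion of those kernels is the paper's contribution, while the prediction step remains this one regularized solve.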
Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles
NASA Technical Reports Server (NTRS)
Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.
1991-01-01
A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour through the solution of a nonlinear least-squares problem. The effect of changes in the nozzle wall parameters is evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach number which have been designed by classical procedures but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and for Mach 6 and Mach 15 contoured nozzles.
First-order system least-squares (FOSLS) for modeling blood flow.
Heys, J J; DeGroff, C G; Manteuffel, T A; McCormick, S F
2006-07-01
The modeling of blood flow through a compliant vessel requires solving a system of coupled nonlinear partial differential equations (PDEs). Traditional methods for solving the system of PDEs do not scale optimally, i.e., doubling the discrete problem size results in a computational time increase of more than a factor of 2. However, the development of multigrid algorithms and, more recently, the first-order system least-squares (FOSLS) finite-element formulation has enabled optimal computational scalability for an ever increasing set of problems. Previous work has demonstrated, and in some cases proved, optimal computational scalability in solving Stokes, Navier-Stokes, elasticity, and elliptic grid generation problems separately. Additionally, coupled fluid-elastic systems have been solved in an optimal manner in 2D for some geometries. This paper presents a FOSLS approach for solving a 3D model of blood flow in a compliant vessel. Blood is modeled as a Newtonian fluid, and the vessel wall is modeled as a linear elastic material of finite thickness. The approach is demonstrated on three different geometries, and optimal scalability is shown to occur over a range of problem sizes. The FOSLS formulation has other benefits, including that the functional serves as a sharp a posteriori error measure.
Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun
2015-09-01
This work was undertaken to establish a quantitative analysis model for rapid determination of the linalool and linalyl acetate content of Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near-infrared (NIR) absorption spectroscopy. Analysis of the NIR absorption peaks of all samples showed that the spectral interval of 7100~4500 cm⁻¹ is rich in chemical information while the interference of random noise there is relatively low, so PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated, and a clustering method divided the remaining 157 samples into a calibration set of 105 and a validation set of 52. Gas chromatography-mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in the oil, and a data matrix was established from the GC-MS reference values of the two compounds together with the original NIR data. To optimize the model, different pretreatment methods were applied to the raw NIR spectra and their filtering effects compared; orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558 for linalool and linalyl acetate, respectively, and was therefore the optimal pretreatment. In addition, forward interval partial least squares (FiPLS) was used to exclude wavelength points that are unrelated to the determined compounds or exhibit nonlinear correlation, leaving 8 spectral intervals totalling 160 wavelength points as the dataset. Combining the OSC-FiPLS-optimized data with partial least squares (PLS) yields a rapid quantitative model for determining linalool and linalyl acetate in Xinjiang lavender essential oil; the numbers of hidden variables of the two…
Semiclassical calculations of tunneling using interpolating moving least-squares potentials
NASA Astrophysics Data System (ADS)
Pham, Phong
The interpolating moving least-squares (IMLS) and local IMLS (L-IMLS) methods are incorporated into semiclassical trajectory simulation, and issues related to the implementation are investigated. Potential energy surfaces (PES) constructed by the IMLS/L-IMLS methods are used to study tunneling in the polyatomic systems HONO and malonaldehyde, where direct dynamics becomes prohibitively expensive at high ab initio levels. To study cis-trans isomerization in HONO, the PES is constructed by L-IMLS fitting at the MP4(SDQ)/6-31++G(d,p) level with the HDMR(5,3,3) basis set. The semiclassical rates are close to the reference quantum mechanical ones. The isomerization is governed by energy transfer into the reaction coordinate, the torsional mode; the rate is strongly mode-selective and much faster in the cis-trans direction than in the opposite one. To study the ground-state splitting of malonaldehyde, the PES is first constructed by single-level L-IMLS fitting at the MP2/6-31G(d,p) level with the HDMR(3,2) basis set. A dual-level method is then employed to increase the accuracy of the PES and reduce computational cost, using MP4/6-31G(d,p) as the high-level method. For a 0.5 kcal/mol fitting tolerance the splitting is 38.7 and 8.8 cm⁻¹ at the MP2 single level, and 29.6 and 5.5 cm⁻¹ at the MP4 dual level, for the H9 and D5D9 isotopomers respectively, compared to the experimental values of 21.6 and 2.884 cm⁻¹. The computed splittings are within a factor of two of experiment and agree with other quantum mechanical and semiclassical studies.
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
Difficulty Factors, Distribution Effects, and the Least Squares Simplex Data Matrix Solution
ERIC Educational Resources Information Center
Ten Berge, Jos M. F.
1972-01-01
In the present article it is argued that the Least Squares Simplex Data Matrix Solution does not deal adequately with difficulty factors inasmuch as the theoretical foundation is insufficient. (Author/CB)
Iterative least-squares solvers for the Navier-Stokes equations
Bochev, P.
1996-12-31
In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.
Least-squares finite element discretizations of neutron transport equations in 3 dimensions
Manteuffel, T.A; Ressel, K.J.; Starkes, G.
1996-12-31
The least-squares finite element framework for the neutron transport equation introduced previously is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P₁ and a P₂ approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.
Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang
2016-04-10
Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. Under the criterion of the conventional least-squares reconstruction method, zones with small aberrations become insensitive and are hindered from being corrected further. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and thereby further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparison of the results shows that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method outperforms the conventional least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system.
Least squares evaluations for form and profile errors of ellipse using coordinate data
NASA Astrophysics Data System (ADS)
Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan
2016-09-01
To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least-squares fitting is investigated for analyzing the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks; this allows a tolerance range to be evaluated accurately with the form error and profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited for separating the two errors in a standard program. Finally, the evaluation method is applied to the measurement of the skirt line of a piston, and the results indicate its effectiveness. This approach provides new evaluation indicators for the measurement of the form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
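As a minimal sketch of the least-squares fitting step described above (an illustration only, not the authors' evaluation program; the function names are hypothetical), an ellipse through coordinate data can be fit algebraically by solving a linear system for the general conic a·x² + b·xy + c·y² + d·x + e·y = 1:

```python
import numpy as np

def fit_ellipse_ls(x, y):
    # Design matrix for the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    D = np.column_stack([x**2, x * y, y**2, x, y])
    # Ordinary linear least squares for the five conic coefficients
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef  # (a, b, c, d, e)

def conic_residuals(coef, x, y):
    # Algebraic distance of each point from the fitted conic
    D = np.column_stack([x**2, x * y, y**2, x, y])
    return D @ coef - 1.0
```

Form and profile errors would then be derived from residuals measured against the appropriate benchmarks (major axis and foci), which this algebraic fit does not do by itself.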
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposing two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive, which permits a highly efficient (constant-time) computation of superpositions (and sufficient statistics) of vector sets composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
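A hedged sketch of the idea, using the standard Kabsch/SVD superposition (names are hypothetical; this is not the authors' C++ library): the statistics below are plain sums, so the statistics of the union of two fragments are obtained by addition rather than by revisiting the coordinates.

```python
import numpy as np

def suff_stats(X, Y):
    # Additive sufficient statistics for LS superposition of
    # corresponding vector sets X, Y (each n x 3)
    return {
        "n": len(X),
        "sx": X.sum(axis=0), "sy": Y.sum(axis=0),   # coordinate sums
        "sxx": (X * X).sum(), "syy": (Y * Y).sum(), # squared norms
        "C": X.T @ Y,                               # 3x3 cross-moments
    }

def combine(s1, s2):
    # Statistics of the union of two sets: plain addition
    return {k: s1[k] + s2[k] for k in s1}

def rmsd_from_stats(s):
    # Optimal superposition RMSD (Kabsch) from the statistics alone
    n, sx, sy = s["n"], s["sx"], s["sy"]
    C = s["C"] - np.outer(sx, sy) / n          # centered cross-moments
    e0 = (s["sxx"] - sx @ sx / n) + (s["syy"] - sy @ sy / n)
    U, sig, Vt = np.linalg.svd(C)
    sig[-1] *= np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    return np.sqrt(max(e0 - 2.0 * sig.sum(), 0.0) / n)
```

Because `combine` is constant-time in the number of points, growing an alignment fragment by fragment never re-reads the coordinates already absorbed into the statistics.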
Least-squares reverse time migration with and without source wavelet estimation
NASA Astrophysics Data System (ADS)
Zhang, Qingchen; Zhou, Hui; Chen, Hanming; Wang, Jie
2016-11-01
Least-squares reverse time migration (LSRTM) attempts to find the best-fit reflectivity model by minimizing the mismatch between observed and simulated seismic data, in which source wavelet estimation is one of the crucial issues. We divide the frequency-domain observed seismic data by the numerical Green's function at the receiver nodes to estimate the source wavelet for the conventional LSRTM method, and propose a source-independent LSRTM based on a convolution-based objective function. The numerical Green's function can be simulated with a Dirac wavelet and the migration velocity in the frequency or time domain. Compared to the conventional method, which requires an additional source-estimation procedure, the source-independent LSRTM is insensitive to the source wavelet and retains its amplitude-preserving ability even with an incorrect wavelet and no source estimation. To improve noise robustness, we apply a robust hybrid-norm objective function to both methods and use synthetic seismic data contaminated by random Gaussian and spike noise with a signal-to-noise ratio of 5 dB to verify their feasibility. The final migration images show that the source-independent algorithm is more robust and preserves amplitudes better than the conventional source-estimated method.
Nucleus detection using gradient orientation information and linear least squares regression
NASA Astrophysics Data System (ADS)
Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.
2015-03-01
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells, and nuclei. The segmentation is used to characterize tissue specimens and to determine disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least-squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate a list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and an increase in the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit using a robust root mean squared error of cross-validation (R-RMSECV), a robust R² value, and the Robust Component Selection (RCS) statistic.
SIMULTANEOUS BACKSCATTER AND ATTENUATION ESTIMATION USING A LEAST SQUARES METHOD WITH CONSTRAINTS
Nam, Kibo; Zagzebski, James A.; Hall, Timothy J.
2011-01-01
Backscatter and attenuation variations are essential contrast mechanisms in ultrasound B-mode imaging. Emerging quantitative ultrasound methods extract and display absolute values of these tissue properties. However, in clinical applications, backscatter and attenuation parameters sometimes are not easily measured because of tissue inhomogeneities above the region of interest. We describe a least squares method (LSM) that fits the echo signal power spectra from a region of interest (ROI) to a 3-parameter tissue model that simultaneously yields estimates of attenuation losses and backscatter coefficients. To test the method, tissue-mimicking phantoms with backscatter and attenuation contrast, as well as uniform phantoms, were scanned with linear array transducers on a Siemens S2000. Attenuation and backscatter coefficients estimated by the LSM were compared with those derived using a reference phantom method (Yao et al. 1990). Results show that the LSM yields effective attenuation coefficients for uniform phantoms comparable to values derived using the reference phantom method. For layered phantoms exhibiting non-uniform backscatter, the LSM resulted in smaller attenuation estimation errors than the reference phantom method. Backscatter coefficients derived using the LSM were in excellent agreement with values obtained from laboratory measurements on test samples and with theory. The LSM is more immune to depth-dependent backscatter changes than commonly used reference phantom methods. PMID:21963038
Donato, David I.
2013-01-01
A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time and with less computer memory than general methods require. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory, and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires 8p² + 16p bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
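The single-pass accumulation of X'X and X'y can be sketched as follows (a generic illustration, not the NDMMF reference implementation; the normal equations are solved here with NumPy rather than a custom Gaussian-elimination solver):

```python
import numpy as np

def ols_single_pass(rows, p):
    # Accumulate X'X (p x p) and X'y (p) in one pass over the
    # observations; the full matrix X is never stored.
    XtX = np.zeros((p, p))
    Xty = np.zeros(p)
    for x_row, y in rows:          # rows: iterable of (predictor vector, response)
        XtX += np.outer(x_row, x_row)
        Xty += x_row * y
    # Solve the normal equations for the best-fit parameters
    return np.linalg.solve(XtX, Xty)
```

Only the p x p accumulator and the p-vector are held in memory, which matches the 8p² + 16p byte figure for 8-byte doubles.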
Extending the trend vector: The trend matrix and sample-based partial least squares
NASA Astrophysics Data System (ADS)
Sheridan, Robert P.; Nachbar, Robert B.; Bush, Bruce L.
1994-06-01
Trend vector analysis [Carhart, R.E. et al., J. Chem. Inf. Comput. Sci., 25 (1985) 64], in combination with topological descriptors such as atom pairs, has proved useful in drug discovery for ranking large collections of chemical compounds in order of predicted biological activity. The compounds with the highest predicted activities, upon being tested, often show a several-fold increase in the fraction of active compounds relative to a randomly selected set. A trend vector is simply the one-dimensional array of correlations between the biological activity of interest and a set of properties or `descriptors' of compounds in a training set. This paper examines two methods for generalizing the trend vector to improve the predicted rank order. The trend matrix method finds the correlations between the residuals and the simultaneous occurrence of descriptors, which are stored in a two-dimensional analog of the trend vector. The SAMPLS method derives a linear model by partial least squares (PLS), using the `sample-based' formulation of PLS [Bush, B.L. and Nachbar, R.B., J. Comput.-Aided Mol. Design, 7 (1993) 587] for efficiency in treating the large number of descriptors. PLS accumulates a predictive model as a sum of linear components. Expressed as a vector of prediction coefficients on properties, the first PLS component is proportional to the trend vector. Subsequent components adjust the model toward full least squares. For both methods the residuals decrease, while the risk of overfitting the training set increases. We therefore also describe statistical checks to prevent overfitting. These methods are applied to two data sets, a small homologous series of disubstituted piperidines, tested on the dopamine receptor, and a large set of diverse chemical structures, some of which are active at the muscarinic receptor. Each data set is split into a training set and a test set, and the activities in the test set are predicted from a fit on the training set. Both the trend
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation and environmental characteristics in the areas of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and the amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
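Weighted least squares as described above can be sketched generically (an illustration with hypothetical names, not the study's polynomial model): scaling each row by the square root of its weight reduces WLS to ordinary least squares.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    # Minimize sum_i w_i * (y_i - x_i' b)^2.
    # Equivalent to OLS on rows scaled by sqrt(w_i).
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef
```

Observations with large error variance receive small weights (commonly w_i = 1/sigma_i²) and thus contribute less to the final estimates.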
Ramirez, L M; Wielopolski, L
2004-12-01
Potassium spectra with low counting statistics were measured with a NaI detector from a water phantom simulating a brain and were analyzed for error propagation in the determination of K using either the Trapezoidal Method or the Library Least-Squares method. We demonstrate, using measured and synthetic spectra, that a smaller error is obtained in the analysis of potassium when the Library Least-Squares method is used.
On the accuracy of least squares methods in the presence of corner singularities
NASA Technical Reports Server (NTRS)
Cox, C. L.; Fix, G. J.
1985-01-01
Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least-squares type tend to display poor convergence properties in such contexts, and mesh refinement or the use of special singular elements does not appreciably improve matters. It is shown that if the least-squares formulation is carried out in an appropriately weighted space, then optimal convergence results in unweighted spaces such as L².
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
NASA Astrophysics Data System (ADS)
Chater, Mario; Ni, Angxiu; Wang, Qiqi
2017-01-01
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Jiang, Lijian; Li, Xinping
2015-08-01
Stochastic multiscale modeling has become a necessary approach for quantifying uncertainty and characterizing multiscale phenomena in many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least-squares high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains and a local least-squares HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least-squares methods. We combine all the local HDMR approximations to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-squares HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-squares HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least-squares HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-squares reduced HDMR is proposed to enhance computational efficiency and approximation accuracy in certain
A Least Squares Temporal Difference Actor-Critic Algorithm with Applications to Warehouse Management
2012-07-01
Reza Moazzez Estanjini, Keyong Li, Ioannis Ch... The algorithm is of the actor-critic type and uses a least squares temporal difference learning method. It operates on a sample path of the system and optimizes the... converges more smoothly than earlier actor-critic algorithms while substantially outperforming heuristics used in practice. Keywords: Markov decision
Speckle evolution with multiple steps of least-squares phase removal
Chen Mingzhou; Dainty, Chris; Roux, Filippus S.
2011-08-15
We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam-pointing error measurement sets that provide inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
A least squares fusion rule in multiple sensors distributed detection systems
NASA Astrophysics Data System (ADS)
Aziz, A. M.
In this paper, a new least-squares data fusion rule for a multiple-sensor distributed detection system is proposed. In the proposed approach, the central processor combines the sensors' hard decisions through a least-squares criterion to make the global hard decision. In contrast to optimum Neyman-Pearson fusion, where the distributed detection system is optimized at the fusion center level or at the sensor level, but not both simultaneously, the proposed approach achieves global optimization at both the fusion center and the distributed sensors. This is done without knowing the error probabilities of the individual distributed sensors, so the proposed least-squares fusion rule does not rely on the stability of the noise environment or of the sensors' false alarm and detection probabilities. It is therefore robust and achieves better global performance. Furthermore, the proposed method can easily be applied to any number of sensors and any type of distributed observations. The performance of the proposed least-squares fusion rule is evaluated and compared to that of the optimum Neyman-Pearson fusion rule; the results show that the proposed rule outperforms the Neyman-Pearson fusion rule.
Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A
2016-01-01
Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are the two chemometric models subjected to a comparative study through handling of UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of the reference HPLC method, with no significant differences observed in accuracy and precision. The SRACLS model gives results comparable to those of the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
Duer, Wayne C; Ogren, Paul J; Meetze, Alison; Kitchen, Chester J; Von Lindern, Ryan; Yaworsky, Dustin C; Boden, Christopher; Gayer, Jeffery A
2008-06-01
The impact of experimental errors in one or both variables on the use of linear least squares was investigated for the method calibrations (response = intercept plus slope times concentration, or equivalently, Y = a₁ + a₂X) frequently used in analytical toxicology. In principle, the most reliable calibrations should consider errors from all sources, but consideration of concentration (X) uncertainties has not been common because of complex fitting-algorithm requirements. Data were obtained for liquid chromatography-tandem mass spectrometry, gas chromatography-mass spectrometry, high-performance liquid chromatography, gas chromatography, and enzymatic assay. The required experimental uncertainties in response were obtained from replicate measurements. The required experimental uncertainties in concentration were determined from the manufacturers' furnished uncertainties in stock solutions coupled with the uncertainties imparted by dilution techniques. The mathematical fitting techniques used in the investigation were ordinary least squares, weighted least squares (WOLS), and generalized least squares (GLS). GLS best-fit results, obtained with an efficient iteration algorithm implemented in a spreadsheet format, are used with a modified WOLS-based formula to derive reliable uncertainties in calculated concentrations. It was found that while the values of the intercepts and slopes were not markedly different across the techniques, the derived uncertainties in the parameters were different. Such differences can significantly affect the predicted uncertainties in concentrations derived from the use of the different linear least-squares equations.
Weighted least-squares in calibration: what difference does it make?
Tellinghuisen, Joel
2007-06-01
In univariate calibration, an unknown concentration or amount x₀ is estimated from its measured response y₀ by comparison with a calibration data set obtained in the same way for known x values. The calibration function y = f(x) contains parameters obtained from a least-squares (LS) fit of the calibration data. Since minimum-variance estimation requires that the data be weighted inversely as their true variances, any other weighting leads to predictable losses of precision in the calibration parameters and in the estimation of x₀. Incorrect weighting also invalidates the apparent standard errors returned by the LS calibration fit. Both effects are studied using Monte Carlo calculations. For the strongest commonly encountered heteroscedasticity, proportional error (σᵢ ∝ yᵢ), neglect of weights yields as much as an order-of-magnitude precision loss for x₀ in the small-x region, but only nominal loss in the calibration mid-range. Use of replicates gives great improvement at small x but can underperform unweighted regression in the mid-to-large x region. Variance function estimation approximates minimum variance, even though the true variance functions are not well reproduced. A relative error test applied to the calibration data themselves is predisposed to favor 1/y² (or 1/x²) weighting, even if the data are homoscedastic. This predisposition weakens when replicate measurements are taken and disappears when the test is applied to an independent set of data. The distinction between a priori and a posteriori parameter standard errors is emphasized. Where feasible, the a priori approach permits reliable assignment of weights, application of a χ² test, and use of the normal distribution for confidence limits.
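A minimal sketch of weighted straight-line calibration with inverse prediction (illustrative only; the function names are hypothetical). For proportional error σᵢ ∝ yᵢ, the appropriate weights are wᵢ = 1/yᵢ².

```python
import numpy as np

def calibrate(x, y, w=None):
    # Straight-line calibration y = a + b*x by (weighted) least squares;
    # w defaults to uniform weights (ordinary least squares).
    if w is None:
        w = np.ones_like(y)
    W = np.sum(w)
    xbar = np.sum(w * x) / W
    ybar = np.sum(w * y) / W
    b = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    a = ybar - b * xbar
    return a, b

def invert(a, b, y0):
    # Estimate the unknown x0 from its measured response y0
    return (y0 - a) / b
```

With the wrong weights the point estimates change little for well-behaved data; the cost, as the abstract emphasizes, shows up in the precision of x₀ and in the apparent standard errors.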
On realizations of least-squares estimation and Kalman filtering by systolic arrays
NASA Technical Reports Server (NTRS)
Chen, M. J.; Yao, K.
1986-01-01
Least-squares (LS) estimation is a basic operation in many signal processing problems. Given y = Ax + v, where A is an m x n coefficient matrix, y is an m x 1 observation vector, and v is an m x 1 zero-mean white noise vector, the simple least-squares solution is the estimated vector x that minimizes the norm ||Ax - y||. It is well known that for an ill-conditioned matrix A, solving least-squares problems by orthogonal-triangular (QR) decomposition and back substitution has robust numerical properties under finite-word-length effects, since the 2-norm is preserved. Many fast algorithms have been proposed and applied to systolic arrays. Gentleman and Kung (1981) first presented the triangular systolic array for a basic Givens reduction. McWhirter (1983) used this array structure to find least-squares estimation errors. Then, by a geometric approach, several different systolic array realizations of the recursive least-squares estimation algorithms of Lee et al. (1981) were derived by Kalson and Yao (1985). Basic QR decomposition algorithms are considered in this paper, and it is found that under a one-row time-updating situation, the Householder transformation degenerates to a simple Givens reduction. Next, an improved least-squares estimation algorithm is derived by considering a modified version of fast Givens reduction. From this approach, the basic relationship between Givens reduction and the modified Gram-Schmidt transformation can easily be understood. This improved algorithm also has lower computational and inter-cell connection complexities compared with other known least-squares algorithms and is more realistic for systolic array implementation.
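The QR route to least squares via Givens reductions can be sketched as follows (a plain sequential illustration, not a systolic-array implementation; names are hypothetical): each rotation zeroes one subdiagonal entry while preserving the 2-norm, and back substitution on the triangular factor gives the estimate.

```python
import numpy as np

def givens_qr_solve(A, y):
    # Solve min ||Ax - y|| by zeroing subdiagonal entries of A with
    # Givens rotations, then back-substituting on the triangular factor.
    R = A.astype(float).copy()
    z = y.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(j + 1, m):
            r = np.hypot(R[j, j], R[i, j])
            if r == 0.0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])   # rotation zeroing R[i, j]
            R[[j, i], j:] = G @ R[[j, i], j:] # apply to rows j and i of R
            z[[j, i]] = G @ z[[j, i]]         # and to the rotated observation
    return np.linalg.solve(np.triu(R[:n, :n]), z[:n])
```

Because each update touches only two rows, this orientation maps naturally onto the triangular systolic arrays discussed in the abstract.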
Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.
2014-05-14
We apply an orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N⁶) to O(N⁵), with remarkably low error. Combined with a T₁-transformed Hamiltonian, this leads to substantial practical accelerations over an optimized density-fitted CCSD implementation.
Comparison of Kriging and Moving Least Square Methods to Change the Geometry of Human Body Models.
Jolivet, Erwan; Lafon, Yoann; Petit, Philippe; Beillas, Philippe
2015-11-01
Finite Element Human Body Models (HBM) have become powerful tools for studying the response to impact. However, they are typically developed for only a limited number of sizes and ages. Various approaches driven by control points have been reported in the literature for the non-linear scaling of these HBM into models with different geometrical characteristics. The purpose of this study is to compare the performance of commonly used control-point-based interpolation methods in different usage scenarios. Performance metrics include respect of the targets, mesh quality, and runability. In this study, the Kriging and Moving Least Squares interpolation approaches were compared in three test cases. The first two cases correspond to changes of anthropometric dimensions of (1) a child model (from 6 to 1.5 years old) and (2) the GHBMC M50 model (Global Human Body Models Consortium, from 50th to 5th percentile female). For the third case, the GHBMC M50 ribcage was scaled to match the rib cage geometry derived from a CT scan. In the first two test cases, all tested methods provided similar shapes with acceptable results in terms of time needed for the deformation (a few minutes at most), overall respect of the targets, element quality distribution, and time step for explicit simulation. The personalization of the rib cage proved to be much more challenging: none of the methods tested provided fully satisfactory results at the level of the rib trajectory and section, and there were corrugated local deformations unless a smooth regression through relaxation was used. Overall, the results highlight the importance of the target definition over the interpolation method.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
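The traditional LS-SVM that the paper robustifies reduces training to a single linear system in the dual variables. A minimal sketch of that baseline (RBF kernel; not the authors' robust variant, and all parameter values are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian kernel between all row pairs of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # LS-SVM regression solves one linear system:
    # [ 0   1^T         ] [b]     = [0]
    # [ 1   K + I/gamma ] [alpha]   [y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # dual weights alpha, bias b

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b
```

Because every training point receives a nonzero alpha, a single outlier shifts the whole solution, which is the sensitivity the proposed robust reweighting addresses.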
Nguyen, Danh V
2005-01-01
An important application of DNA microarray technologies involves monitoring the global state of transcriptional program in tumor cells. One goal in cancer microarray studies is to compare the clinical outcome, such as relapse-free or overall survival, for subgroups of patients defined by global gene expression patterns. A method of comparing patient survival, as a function of gene expression, was recently proposed in [Bioinformatics 18 (2002) 1625] by Nguyen and Rocke. Due to the (a) high-dimensionality of microarray gene expression data and (b) censored survival times, a two-stage procedure was proposed to relate survival times to gene expression profiles. The first stage involves dimensionality reduction of the gene expression data by partial least squares (PLS) and the second stage involves prediction of survival probability using proportional hazard regression. In this paper, we provide a systematic assessment of the performance of this two-stage procedure. PLS dimension reduction involves complex non-linear functions of both the predictors and the response data, rendering exact analytical study intractable. Thus, we assess the methodology under a simulation model for gene expression data with a censored response variable. In particular, we compare the performance of PLS dimension reduction relative to dimension reduction via principal components analysis (PCA) and to a modified PLS (MPLS) approach. PLS performed substantially better relative to dimension reduction via PCA when the total predictor variance explained is low to moderate (e.g. 40%-60%). It performed similarly to MPLS and slightly better in some cases. Additionally, we examine the effect of censoring on the dimension reduction stage. The performance of all methods deteriorates for a high censoring rate, although PLS-PH performed best overall.
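The core difference between the two reduction stages compared above can be illustrated with the first component of each: the leading PLS direction maximizes covariance with the response, while the leading PCA direction maximizes predictor variance alone. A minimal sketch (one-component PLS via the classical w ∝ X'y direction; synthetic data):

```python
import numpy as np

def first_directions(X, y):
    # First PLS weight vector: w proportional to X^T y maximises the
    # covariance of the score t = Xw with the response y.
    Xc = X - X.mean(0)
    yc = y - y.mean()
    w_pls = Xc.T @ yc
    w_pls /= np.linalg.norm(w_pls)
    # First principal component: top right singular vector of centred X,
    # chosen without reference to y.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return w_pls, Vt[0]
```

By optimality of w ∝ X'y over unit vectors, the PLS score can never have lower absolute covariance with y than the PCA score, which is one intuition for why PLS outperforms PCA when the response-relevant variance is a small share of the total.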
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
Least-square deconvolution: a framework for interpreting short tandem repeat mixtures.
Wang, Tsewei; Xue, Ning; Birdwell, J Douglas
2006-11-01
Interpreting mixture short tandem repeat DNA data is often a laborious process, involving trying different genotype combinations mixed at assumed DNA mass proportions and assessing whether the result is well supported by the relative peak-height information of the mixture sample. If a clear pattern of major-minor alleles is apparent, it is feasible to identify the major alleles of each locus and form a composite genotype profile for the major contributor. When alleles are shared between the two contributors, and/or heterozygous peak imbalance is present, it becomes complex and difficult to deduce the profile of the minor contributor. The manual trial and error procedures performed by an analyst in the attempt to resolve mixture samples have been formalized in the least-square deconvolution (LSD) framework reported here for two-person mixtures, with the allele peak height (or area) information as its only input. LSD operates on the peak-data information of each locus separately, independent of all other loci, and finds the best-fit DNA mass proportions and calculates error residual for each possible genotype combination. The LSD mathematical result for all loci is then to be reviewed by a DNA analyst, who will apply a set of heuristic interpretation guidelines in an attempt to form a composite DNA profile for each of the two contributors. Both simulated and forensic peak-height data were used to support this approach. A set of heuristic guidelines is to be used in forming a composite profile for each of the mixture contributors in analyzing the mathematical results of LSD. The heuristic rules involve the checking of consistency of the best-fit mass proportion ratios for the top-ranked genotype combination case among all four- and three-allele loci, and involve assessing the degree of fit of the top-ranked case relative to the fit of the second-ranked case. A different set of guidelines is used in reviewing and analyzing the LSD mathematical results for two
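The per-locus computation described above, fitting mass proportions and ranking genotype combinations by residual, can be sketched directly. This is a minimal illustration under simplifying assumptions (peak height proportional to allele copies times contributor mass, no stutter or dropout); the data and genotype labels below are synthetic.

```python
import numpy as np

def lsd_rank(peaks, combos):
    # peaks: observed peak heights for the alleles at one locus.
    # combos: {name: (g1, g2)} allele-copy-count vectors (entries 0, 1, 2)
    # for each candidate two-person genotype combination.
    results = []
    for name, (g1, g2) in combos.items():
        G = np.column_stack([g1, g2]).astype(float)
        m, *_ = np.linalg.lstsq(G, peaks, rcond=None)  # best-fit masses
        resid = np.sum((peaks - G @ m) ** 2)           # error residual
        results.append((resid, name, m / m.sum()))     # mass proportions
    results.sort(key=lambda t: t[0])
    return results  # smallest residual (best-fitting combination) first
```

Ranking a 3:1 two-person mixture this way recovers both the correct genotype pairing (zero residual) and the 0.75/0.25 mass proportions.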
Least squares regression methods for clustered ROC data with discrete covariates.
Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton
2016-07-01
The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests to distinguish the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used for the estimation of the ROC curve from correlated data, how to develop least squares methods to estimate the ROC curve from clustered data has not been studied. Also, the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop the least squares ROC methods to allow the baseline and link functions to differ, and more importantly, to accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuous property of the true underlying curve. The least squares methods are shown to be more efficient than the existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods.
Sun, Liang; Ji, Shuiwang; Ye, Jieping
2011-01-01
Canonical Correlation Analysis (CCA) is a well-known technique for finding the correlations between two sets of multidimensional variables. It projects both sets of variables onto a lower-dimensional space in which they are maximally correlated. CCA is commonly applied for supervised dimensionality reduction in which the two sets of variables are derived from the data and the class labels, respectively. It is well-known that CCA can be formulated as a least-squares problem in the binary class case. However, the extension to the more general setting remains unclear. In this paper, we show that under a mild condition which tends to hold for high-dimensional data, CCA in the multilabel case can be formulated as a least-squares problem. Based on this equivalence relationship, efficient algorithms for solving least-squares problems can be applied to scale CCA to very large data sets. In addition, we propose several CCA extensions, including the sparse CCA formulation based on the 1-norm regularization. We further extend the least-squares formulation to partial least squares. In addition, we show that the CCA projection for one set of variables is independent of the regularization on the other set of multidimensional variables, providing new insights on the effect of regularization on CCA. We have conducted experiments using benchmark data sets. Experiments on multilabel data sets confirm the established equivalence relationships. Results also demonstrate the effectiveness and efficiency of the proposed CCA extensions.
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
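The estimation problem the abstract describes can be illustrated with a classical baseline: weighted least squares with Poisson variances approximated by the observed counts. This sketch is not the authors' one-count-at-a-time method; the response matrix and fluxes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-component linear model: counts[i] ~ Poisson((M @ f)[i]).
M = rng.uniform(0.5, 2.0, size=(200, 3))   # known instrumental response
f_true = np.array([50.0, 200.0, 500.0])    # underlying fluxes to recover
counts = rng.poisson(M @ f_true)

# Weighted least squares with weights 1/max(y, 1), the usual stand-in for
# the Poisson variance when counts are large.
w = 1.0 / np.maximum(counts, 1.0)
A = M * np.sqrt(w)[:, None]
b = counts * np.sqrt(w)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

This approximation degrades at low counts (empty bins carry no usable weight), which is precisely the regime that motivates treating the data one Poisson count at a time.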
CORRIGENDUM: A weighted total least-squares algorithm for fitting a straight line
NASA Astrophysics Data System (ADS)
Krystek, M.; Anton, M.
2008-07-01
(1) Equation (22c) should be replaced by (The last term in the parentheses should have f replaced by v2.) (2) On page 3441: The value of the parameter a is -0.4805 and not -4.8054 as was erroneously stated. (3) Equation (26) should be replaced by (The expression should be squared.) The authors would like to apologize for the mistakes, and would also like to express their thanks to Dr Simon Iveson of the Department of Chemical Engineering, Faculty of Engineering & Built Environment, University of Newcastle, Callaghan, NSW 2308, Australia, who discovered the errors (1) and (2) above.
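The corrigendum above concerns the weighted total least-squares straight-line fit. In the unweighted special case the fit reduces to orthogonal regression, which minimizes perpendicular rather than vertical distances; a minimal sketch of that special case (not the paper's weighted algorithm):

```python
import numpy as np

def tls_line(x, y):
    # Orthogonal-distance (total least squares) straight-line fit:
    # the line direction is the principal axis of the centred data,
    # i.e. the leading right singular vector.
    xm, ym = x.mean(), y.mean()
    U = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    dx, dy = Vt[0]                 # direction of largest spread
    slope = dy / dx
    intercept = ym - slope * xm    # line passes through the centroid
    return slope, intercept
```

Unlike ordinary least squares, this treats errors in both coordinates symmetrically, which is the setting the weighted algorithm generalizes.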
A variant of sparse partial least squares for variable selection and data exploration
Olson Hunt, Megan J.; Weissfeld, Lisa; Boudreau, Robert M.; Aizenstein, Howard; Newman, Anne B.; Simonsick, Eleanor M.; Van Domelen, Dane R.; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina
2014-01-01
When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed “all-possible” SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a “large” number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors. PMID:24624079
Boccard, Julien; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.
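The ANOVA-based partitioning into submatrices that the multiblock model consumes can be sketched for a single fixed effect: the data matrix splits into a grand mean, an effect submatrix of level means, and residuals. This sketch covers only the partitioning step, not the OPLS multiblock model itself.

```python
import numpy as np

def anova_partition(X, factor):
    # X: samples x variables data matrix; factor: level label per sample.
    # Returns the grand mean, the effect submatrix (level means of the
    # centred data) and the residual submatrix.
    grand = X.mean(0)
    Xc = X - grand
    effect = np.zeros_like(Xc)
    for lvl in np.unique(factor):
        idx = factor == lvl
        effect[idx] = Xc[idx].mean(0)
    resid = Xc - effect
    return grand, effect, resid
```

The decomposition is exact (the parts sum back to X) and the effect and residual blocks are orthogonal, which is what makes effect-to-residuals ratios meaningful.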
A method based on moving least squares for XRII image distortion correction
Yan Shiju; Wang Chengtao; Ye Ming
2007-11-15
This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on the methods of moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effect of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points was tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was found to be sensitive to neither pincushion nor sigmoidal distortion. The sensitivity of the proposed method to local distortion was lower than or comparable with that of the traditional global method. The sensitivity of the proposed method to noise was higher than that of all three traditional methods. Nevertheless, provided the standard deviation of noise was not greater than 0.1 pixels, accuracy of the proposed method was still higher than that of the traditional methods. The sensitivity of the proposed method to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively).
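The moving least squares building block works by fitting, at each evaluation point, a low-order polynomial weighted toward nearby control points. A minimal 2D sketch with a linear basis and Gaussian weights (illustrative only; the paper combines MLS with additional polynomial fitting):

```python
import numpy as np

def mls_value(p, ctrl, vals, h=0.5):
    # Moving least squares with linear basis [1, x, y]: at evaluation
    # point p, fit a weighted plane through the control values and
    # evaluate it at p. Gaussian weights of width h localise the fit.
    w = np.exp(-((ctrl - p) ** 2).sum(1) / h**2)
    B = np.column_stack([np.ones(len(ctrl)), ctrl])    # basis at controls
    Bw = B * w[:, None]
    coeff = np.linalg.solve(B.T @ Bw, Bw.T @ vals)     # weighted normal eqs
    return np.array([1.0, *p]) @ coeff                 # evaluate plane at p
```

A useful property for distortion correction: with a linear basis the MLS surface reproduces any affine field exactly, so global affine distortion components are corrected without error.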
Fruit fly optimization based least square support vector regression for blind image restoration
NASA Astrophysics Data System (ADS)
Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei
2014-11-01
The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function and a description of the noise as priors. However, it is not practical for many real image processing. The recovery processing needs to be a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to the differences of PSF and noise energy, blurring images can be quite different. It is difficult to achieve a good balance between proper assumption and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least square support vector regression (LSSVR) has been proven to offer strong potential in estimating and forecasting issues. Therefore, this paper proposes a LSSVR-based image restoration method. However, selecting the optimal parameters for support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantages of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training LSSVR. The two parameters of LSSVR are optimized though FOA. The fitness function of FOA is calculated by the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain satisfactory restoration effect. Compared with BP neural network regression, SVR method and Lucy-Richardson algorithm, it speeds up the restoration rate and
A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong
Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving large-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be a satisfactory one.
Spackman, K. A.
1991-01-01
This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
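The core argument, that least squares training of a logistic output can fail where maximum likelihood succeeds, is visible in the gradients of the two criteria for a single logistic unit. The SSE gradient carries an extra p(1-p) factor that vanishes when the unit saturates, even if the prediction is wrong. A minimal sketch (single unit, not the paper's multi-layer ML-BP):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients(w, x, y):
    # Gradients of the two training criteria for one logistic unit.
    p = sigmoid(w @ x)
    g_ml = (p - y) * x                   # cross-entropy / maximum likelihood
    g_ls = (p - y) * p * (1.0 - p) * x   # sum of squared errors (LS-BP)
    return g_ml, g_ls
```

At a confidently wrong prediction (large w·x with y = 0), the ML gradient stays near 1 while the LS gradient collapses toward zero, so LS-BP barely corrects the error, one reason the paper recommends abandoning least squares estimation here.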
Least-squares reverse-time migration of Cranfield VSP data for monitoring CO2 injection
NASA Astrophysics Data System (ADS)
TAN, S.; Huang, L.
2012-12-01
Cost-effective monitoring for carbon utilization and sequestration requires high-resolution imaging with a minimal amount of data. Least-squares reverse-time migration is a promising imaging method for this purpose. We apply least-squares reverse-time migration to a portion of the 3D vertical seismic profile data acquired at the Cranfield enhanced oil recovery field in Mississippi for monitoring CO2 injection. Conventional reverse-time migration of limited data suffers from significant image artifacts and poor image resolution. Least-squares reverse-time migration can reduce image artifacts and improve the image resolution. We demonstrate the significant improvements of least-squares reverse-time migration by comparing its migration images of the Cranfield VSP data with those obtained using conventional reverse-time migration.
NASA Astrophysics Data System (ADS)
Ozcelikkale, Altug; Sert, Cuneyt
2012-05-01
Least-squares spectral element solution of steady, two-dimensional, incompressible flows are obtained by approximating velocity, pressure and vorticity variable set on Gauss-Lobatto-Legendre nodes. Constrained Approximation Method is used for h- and p-type nonconforming interfaces of quadrilateral elements. Adaptive solutions are obtained using a posteriori error estimates based on least squares functional and spectral coefficient. Effective use of p-refinement to overcome poor mass conservation drawback of least-squares formulation and successful use of h- and p-refinement together to solve problems with geometric singularities are demonstrated. Capabilities and limitations of the developed code are presented using Kovasznay flow, flow past a circular cylinder in a channel and backward facing step flow.
On the equivalence of Kalman filtering and least-squares estimation
NASA Astrophysics Data System (ADS)
Mysen, E.
2017-01-01
The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
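The algebraic equivalence described above can be demonstrated directly for a constant state: processing observations one at a time with the Kalman-style recursive least squares update reproduces the batch (regularised) least-squares estimate exactly. A minimal sketch, with the prior covariance playing the role of the regulariser:

```python
import numpy as np

def recursive_ls(X, y, lam=1e-3):
    # Recursive least squares / Kalman filter for a static state with
    # unit measurement noise and prior covariance P0 = I / lam.
    p = X.shape[1]
    theta = np.zeros(p)
    P = np.eye(p) / lam                       # a priori error covariance
    for x, yi in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)               # Kalman gain
        theta = theta + k * (yi - x @ theta)  # innovation update
        P = P - np.outer(k, Px)               # covariance downdate
    return theta
```

By the matrix inversion lemma the recursion is algebraically identical to the normal-equation solution (X'X + lam*I)^(-1) X'y, which is the link the paper exploits to embed Kalman filter solutions in the normal equation formalism.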
Influence of the least-squares phase on optical vortices in strongly scintillated beams
Chen Mingzhou; Roux, Filippus S.
2009-07-15
The optical vortices that exist in strongly scintillated beams make it difficult for conventional adaptive optics systems to remove the phase distortions. When the least-squares reconstructed phase is removed, the vortices still remain. However, we found that the removal of the least-squares phase induces a portion of the vortices to be annihilated during subsequent propagation, causing a reduction in the total number of vortices. This can be understood in terms of the restoration of equilibrium between explicit vortices, which are visible in the phase function, and vortex bound states, which are somehow encoded in the continuous phase fluctuations. Numerical simulations are provided to show that the total number of optical vortices in a strongly scintillated beam can be reduced significantly after a few steps of least-squares phase corrections.
Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao
2016-10-06
An ultrasonic image speckle noise removal method using a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. Based on the least squares principle, a total least squares model is established for the cardiac ultrasound speckle noise removal process; orthogonal projection transformation is applied to the output of the model, and denoising of the cardiac ultrasound speckle noise is thereby realized. Experimental results show that the improved algorithm can greatly improve the resolution of the image and meet the needs of clinical medical diagnosis and treatment of the cardiovascular system of the head and neck. Furthermore, the success in imaging of carotid arteries has strong implications for neurological complications such as stroke.
A note on implementation of decaying product correlation structures for quasi-least squares.
Shults, Justine; Guerra, Matthew W
2014-08-30
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.
A point cloud modeling method based on geometric constraints mixing the robust least squares method
NASA Astrophysics Data System (ADS)
Yue, Jianping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan
2016-10-01
3D laser scanning technology has provided a new method for the acquisition of spatial 3D information and has been widely used in surveying and mapping engineering because it is automatic and highly precise. The 3D laser scanning data processing workflow mainly includes field laser data acquisition, in-office registration (splicing) of the laser data, and later 3D modeling and data integration. For point cloud modeling, domestic and foreign researchers have done a lot of research. Surface reconstruction techniques mainly include the point-shape model, the triangle model, the triangular Bezier surface model, the rectangular surface model and so on; neural networks and the alpha-shape method are also used in curved surface reconstruction. These methods, however, often focus on single-surface fitting or automatic or manual block fitting, which ignores the model's integrity. This leads to serious problems after stitching: the separately fitted surfaces often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. However, modeling theory for such special constraints as dimension and position constraints is not widely used. One traditional modeling method that adds geometric constraints combines the penalty function method and the Levenberg-Marquardt algorithm (L-M algorithm), whose stability is quite good. But in the research process, it was found that this method is greatly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to enhance the accuracy of the initial value, then use the penalty function method to transform constrained optimization problems into unconstrained ones, and finally solve the problems using the L-M algorithm. The experimental results
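The L-M solver at the heart of the pipeline alternates between Gauss-Newton-like and gradient-descent-like steps via a damping parameter. A minimal sketch of that unconstrained core only (the penalty terms and robust initialisation are not shown; the damping schedule below is illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50):
    # Minimal Levenberg-Marquardt loop: solve (J^T J + mu I) dp = -J^T r,
    # shrink the damping mu after a successful step, grow it otherwise.
    p = np.asarray(p0, float)
    mu = 1e-2
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        dp = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), -J.T @ r)
        new_cost = np.sum(residual(p + dp) ** 2)
        if new_cost < cost:                 # accept step, trust model more
            p, cost, mu = p + dp, new_cost, mu * 0.3
        else:                               # reject step, damp harder
            mu *= 10.0
    return p
```

The large-mu regime is what gives L-M its stability far from the optimum, though, as the abstract notes, a poor initial value can still stall it, hence the robust least-squares initialisation proposed in the paper.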
Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix
NASA Astrophysics Data System (ADS)
Kermarrec, Gaël; Schön, Steffen
2016-09-01
Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence for a certain class of polynomial regressions between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account thanks to a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the rows elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the mm for all simulated GPS cases and below the sub-mm for the relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
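The condensed matrix construction, diagonal weights equal to the row sums of the fully populated weight matrix, can be checked directly for the mean estimator, where the equivalence with GLS is exact. A minimal sketch with a synthetic AR(1)-type covariance:

```python
import numpy as np

# AR(1)-type correlation between n observations (hypothetical example).
n, rho = 20, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
W = np.linalg.inv(Sigma)              # fully populated weight matrix
y = np.linspace(0.0, 1.0, n) + 0.3    # any data vector

# GLS estimate of the mean with the full weight matrix...
one = np.ones(n)
mu_gls = (one @ W @ y) / (one @ W @ one)

# ...equals the diagonally weighted LS estimate whose weights are the
# row sums of W (the "condensed" diagonal matrix of the paper).
d = W.sum(axis=1)
mu_dwls = (d @ y) / d.sum()
```

For symmetric W, the sum of d_i*y_i is exactly 1'Wy, so the two estimates coincide; the paper's contribution is showing empirically that this carries over to GPS design matrices beyond the mean estimator.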
L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing
NASA Astrophysics Data System (ADS)
Demetriou, I. C.
2006-04-01
Fortran 77 software is given for least squares smoothing of data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient by the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics
Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression
Verdoolaege, G.; Shabbir, A.; Hornung, G.
2016-11-15
Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
Lazarov, R D; Vassilevski, P S
1999-05-06
In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels, namely eucalyptus, soil (understory), and shade, was performed. The fraction images generated for shade (shade images) by the two methods were compared in terms of performance and computation time. The derived shade images are related to the observed variation in forest structure; i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
NASA Astrophysics Data System (ADS)
Ling, Zhao; Yeling, Wang; Guijun, Hu; Yunpeng, Cui; Jian, Shi; Li, Li
2013-07-01
A recursive least squares constant modulus algorithm based on QR decomposition (QR-RLS-CMA) is proposed for the first time as a polarization demultiplexing method. We compare its performance with that of the stochastic gradient descent constant modulus algorithm (SGD-CMA) and the recursive least squares constant modulus algorithm (RLS-CMA) in a polarization-division-multiplexing system with coherent detection. It is demonstrated that QR-RLS-CMA is an efficient demultiplexing algorithm that avoids the step-length choice problem of SGD-CMA. It also has better symbol error rate (SER) performance and a more stable convergence property.
Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1992-01-01
TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. In the batch least-squares analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. In the sequential method analysis, the consistency was about 10 to 30 meters in terms of the 3-sigma state error covariance function. As a measure of consistency, the first residual of each pass was within the 3-sigma bound in the residual space.
Least square neural network model of the crude oil blending process.
Rubio, José de Jesús
2016-06-01
In this paper, a recursive least squares algorithm is designed for big-data learning of a feedforward neural network. The proposed method, which combines recursive least squares with a feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process.
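The core recursion of a recursive least squares (RLS) estimator, which the paper combines with a feedforward network, can be sketched for a linear-in-parameters model. The function name, toy data, and hyperparameters below are illustrative assumptions, not the paper's crude-oil model:

```python
import numpy as np

def rls_fit(Phi, y, lam=1.0, delta=100.0):
    """Recursive least squares for y_k = theta . phi_k.

    lam is the forgetting factor; delta sets the (large) initial
    covariance so early updates are dominated by the data.
    """
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (yk - phi @ theta)   # prediction-error update
        P = (P - np.outer(k, phi) @ P) / lam     # covariance update
    return theta

rng = np.random.default_rng(3)
Phi = rng.standard_normal((200, 3))
true = np.array([0.5, -1.0, 2.0])
y = Phi @ true + 0.01 * rng.standard_normal(200)
theta = rls_fit(Phi, y)
print(np.round(theta, 2))  # close to [0.5, -1.0, 2.0]
```

In the paper's setting the regressor vector phi would be the hidden-layer output of the network; here it is random data for the demonstration only.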
First-Order System Least-Squares for the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method that is highly suitable for building prediction models with tensor input and output. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input; the concentration and the actuation duration of high glucose made up the model's output. The results showed that, on the premise of predicting two-dimensional variables, HOPLS had the same predictive ability as, and a lower dispersion degree than, partial least squares (PLS).
ERIC Educational Resources Information Center
Knol, Dirk L.; ten Berge, Jos M. F.
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…
NASA Astrophysics Data System (ADS)
Crawford, Jagoda; Hughes, Catherine E.; Lykoudis, Spyros
2014-11-01
The relationship between δ2H and δ18O in precipitation at a site, known as the local meteoric water line (LMWL), is normally defined using an ordinary least squares regression (OLSR). However, it has been argued that this form of minimisation is more appropriate when a predictive model is being developed (and there are no measurement errors associated with the independent variable) and that orthogonal regression, also known as major axis regression (MA), or reduced major axis regression (RMA) may be better suited when a relationship is being sought between two variables which are related by underlying physical processes. The slope of the LMWLs for the GNIP data is examined using the three linear regressions, and the corresponding precipitation weighted regressions. The MA and RMA regressions generally produced larger slopes, with the largest differences for oceanic islands and coastal sites. The difference between the various methods was the least for continental sites. In all considered cases, both for the standard and precipitation weighted regressions, the slope produced by RMA was in between those determined by OLSR and MA, with OLSR producing the smaller slope. Further, the results of both RMA and precipitation weighted RMA were less sensitive to the removal of outliers and values with high leverage statistic. The results indicate that when a good linear relationship exists between δ2H and δ18O, all considered regressions result in a close fit. When the values are distributed within circles or ellipses on the δ18O-δ2H bivariate plot, as would appear in coastal and oceanic sites from first stage rainout, care needs to be taken as to which regression is utilised. However, in some of these cases, it appears the precipitation weighted MA (and in some cases MA) produces large slopes. In these cases the average Root Mean Sum of Squared Error (rmSSEav) value of the fit can be used as a guide of the suitability of the MA and PWMA for each site. Where the slope
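The three slopes discussed above can be computed directly from sample moments: OLSR from the correlation, RMA from the ratio of standard deviations, and MA from the leading eigenvector of the covariance matrix. The synthetic δ18O/δ2H data below are purely illustrative (a meteoric-water-line-like slope of 8 with added noise):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(-5.0, 2.0, 200)                 # d18O (per mil)
y = 8.0 * x + 10.0 + rng.normal(0, 3.0, 200)   # d2H with noise

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(ddof=1), y.std(ddof=1)

b_olsr = r * sy / sx                 # ordinary least squares slope
b_rma = np.sign(r) * sy / sx         # reduced major axis slope
cov = np.cov(x, y)
eigvals, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, -1]                   # eigenvector of largest eigenvalue
b_ma = v[1] / v[0]                   # major axis slope

print(b_olsr, b_rma, b_ma)  # OLSR <= RMA <= MA here, matching the abstract
```

Since b_RMA = b_OLSR / |r|, the RMA slope always exceeds the OLSR slope in magnitude when |r| < 1, and for sy > sx the MA slope is at least as large as the RMA slope, consistent with the ordering reported above.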
Recovery of Weak Common Factors by Maximum Likelihood and Ordinary Least Squares Estimation.
ERIC Educational Resources Information Center
Briggs, Nancy E.; MacCallum, Robert C.
2003-01-01
Examined the relative performance of two commonly used methods of parameter estimation in factor analysis, maximum likelihood (ML) and ordinary least squares (OLS) through simulation. In situations with a moderate amount of error, ML often failed to recover the weak factor while OLS succeeded. Also presented an example using empirical data. (SLD)
The Use of Orthogonal Distances in Generating the Total Least Squares Estimate
ERIC Educational Resources Information Center
Glaister, P.
2005-01-01
The method of least squares enables the determination of an estimate of the slope and intercept of a straight line relationship between two quantities or variables X and Y. Although a theoretical relationship may exist between X and Y of the form Y = mX + c, in practice experimental or measurement errors will occur, and the observed or measured…
Beaton, Derek; Dunlop, Joseph; Abdi, Hervé
2016-12-01
For nearly a century, detecting the genetic contributions to cognitive and behavioral phenomena has been a core interest for psychological research. Recently, this interest has been reinvigorated by the availability of genotyping technologies (e.g., microarrays) that provide new genetic data, such as single nucleotide polymorphisms (SNPs). These SNPs, which represent pairs of nucleotide letters (e.g., AA, AG, or GG) found at specific positions on human chromosomes, are best considered as categorical variables, but this coding scheme can complicate the multivariate analysis of their relationships with behavioral measurements, because most multivariate techniques for analyzing relationships between sets of variables are designed for quantitative variables. To address this problem, we present a generalization of partial least squares (a technique used to extract the information common to two different data tables measured on the same observations), called partial least squares correspondence analysis, that is specifically tailored for the analysis of categorical and mixed ("heterogeneous") data types. Here, we formally define and illustrate, in a tutorial format, how partial least squares correspondence analysis extends to various types of data and design problems that are particularly relevant for psychological research that includes genetic data. We illustrate partial least squares correspondence analysis with genetic, behavioral, and neuroimaging data from the Alzheimer's Disease Neuroimaging Initiative. R code is available on the Comprehensive R Archive Network and via the authors' websites.
NASA Astrophysics Data System (ADS)
Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.
2016-08-01
Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower-upper-middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
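The (inverse distance) weighted least-squares gradient reconstruction described above can be sketched as follows. The stencil, weights, and test field are illustrative assumptions, not the paper's meshes; a linear field is recovered exactly even on a highly stretched stencil:

```python
import numpy as np

def ls_gradient(xc, neighbors, values, vc, inverse_distance=True):
    """Weighted least-squares gradient estimate at cell centre xc.

    Solves min over g of sum_k w_k * (g . (x_k - xc) - (u_k - u_c))^2.
    """
    dx = neighbors - xc            # (k, 2) offsets to neighbours
    du = values - vc               # (k,) value differences
    if inverse_distance:
        w = 1.0 / np.linalg.norm(dx, axis=1)
    else:
        w = np.ones(len(dx))
    A = dx * w[:, None]
    b = du * w
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# Linear field u = 2x + 3y on a strongly stretched stencil
xc = np.array([0.0, 0.0])
nbrs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])
u = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
grad = ls_gradient(xc, nbrs, u(nbrs), u(xc))
print(grad)  # approximately [2., 3.]
```

Any exactly linear field is reproduced regardless of weighting; the under-estimation reported in the abstract arises from curvature, where the weighting choice starts to matter.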
ERIC Educational Resources Information Center
Rocconi, Louis M.
2013-01-01
This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…
A negative-norm least squares method for Reissner-Mindlin plates
NASA Astrophysics Data System (ADS)
Bramble, J. H.; Sun, T.
1998-07-01
In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.
Superresolution of 3-D computational integral imaging based on moving least square method.
Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho
2014-11-17
In this paper, we propose an edge-directive moving least squares (ED-MLS) based superresolution method for computational integral imaging reconstruction (CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, the proposed ED-MLS method utilizes the properties of the moving least squares: it takes the direction of the edge into account in the moving least squares reconstruction to deal with the abrupt brightness changes in edge regions, and it is less sensitive to registration error. Furthermore, we propose a framework that shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced and that a high-resolution reconstructed 3-D image can be obtained with the proposed method.
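The moving least squares building block underlying the method can be sketched in one dimension. This is only the standard MLS local fit (linear basis, Gaussian weight); the paper's edge-directive variant additionally adapts the weights to edge direction:

```python
import numpy as np

def mls_eval(x0, xs, ys, h=0.3):
    """Plain 1-D moving least squares: weighted local linear fit at x0.

    Weights decay with distance from x0, so the fit "moves" with the
    evaluation point. Returns the smoothed value at x0.
    """
    w = np.exp(-((xs - x0) / h) ** 2)          # Gaussian weight function
    A = np.column_stack([np.ones_like(xs), xs - x0])  # linear basis at x0
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)
    return coef[0]                              # local fit value at x0

xs = np.linspace(0.0, 1.0, 30)
ys = 2.0 * xs + 1.0            # linear data is reproduced exactly by MLS
val = mls_eval(0.5, xs, ys)
print(val)  # 2.0 (up to floating point)
```

Because the basis is linear, MLS reproduces any linear signal exactly; the edge handling in ED-MLS matters precisely where the signal is not locally linear.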
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Using Technology to Optimize and Generalize: The Least-Squares Line
ERIC Educational Resources Information Center
Burke, Maurice J.; Hodgson, Ted R.
2007-01-01
With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…
Application of the Least Squares Method for Determining Magnetic Compass Deviation
NASA Astrophysics Data System (ADS)
Felski, Andrzej
This paper describes an algorithm for evaluating magnetic compass deviation based on the least squares method. An automatic system, coupled with a magnetic compass recording device, was built for fine estimation of the deviation curve on the basis of an incomplete compass swing.
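The paper's exact formulation is not given in the abstract; as a minimal sketch, deviation observations from an incomplete swing can be least-squares fitted to the classical five-coefficient deviation formula d(h) = A + B sin h + C cos h + D sin 2h + E cos 2h. The headings and coefficients below are invented for the demonstration:

```python
import numpy as np

# Hypothetical deviation observations (degrees) at compass headings from
# an incomplete swing; the true coefficients are chosen for this demo.
A, B, C, D, E = 1.0, -2.0, 1.5, 0.5, -0.3
headings = np.deg2rad([0, 45, 90, 135, 180, 250, 300])  # incomplete circle
dev = (A + B * np.sin(headings) + C * np.cos(headings)
       + D * np.sin(2 * headings) + E * np.cos(2 * headings))

# Design matrix for the five-coefficient deviation formula
M = np.column_stack([np.ones_like(headings),
                     np.sin(headings), np.cos(headings),
                     np.sin(2 * headings), np.cos(2 * headings)])
coeffs, *_ = np.linalg.lstsq(M, dev, rcond=None)
print(np.round(coeffs, 3))  # recovers A, B, C, D, E
```

With at least five well-spread headings the system is overdetermined and the coefficients are recovered even though the swing does not cover the full circle.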
ERIC Educational Resources Information Center
Young, Forrest W.; And Others
1976-01-01
A method is discussed which extends canonical regression analysis to the situation where the variables may be measured as nominal, ordinal, or interval, and where they may be either continuous or discrete. The method, which is purely descriptive, uses an alternating least squares algorithm and is robust. Examples are provided. (Author/JKS)
Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator
ERIC Educational Resources Information Center
Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard
2011-01-01
The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…
In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
Least-Squares Approaches for the Time-Dependent Maxwell Equations
Zhiquiang, C; Jones, J
2001-12-01
When the author was at CASC in LLNL during the period between July and December of last year, he was working on two research topics: (1) least-squares approaches for elasticity and Maxwell equations and (2) high-accuracy approximations for non-smooth problems.
A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.
2011-01-01
Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…
A theorem of equivalence on the methods of least-squares estimation
NASA Technical Reports Server (NTRS)
Wu, S.-C.
1985-01-01
A theorem on the methods of least-squares estimation is stated and proved. This theorem enables the replacement of a system of correlated measurements by an equivalent system of uncorrelated measurements without a whitening process, thus simplifying the analysis while resulting in the same minimum-variance estimate.
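The construction of the theorem itself is not reproduced in the abstract. As background, the following sketch (with invented data) illustrates the whitening route the theorem avoids: the minimum-variance (GLS) estimate with a fully populated covariance equals ordinary least squares on measurements whitened by a Cholesky factor:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 30, 2
X = rng.standard_normal((n, p))
beta = np.array([1.0, -0.5])
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
Sigma = L @ L.T                        # measurement covariance
y = X @ beta + L @ rng.standard_normal(n)  # correlated noise

# Minimum-variance (GLS) estimate with the full covariance
Si = np.linalg.inv(Sigma)
b_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# Whitening route: transform to uncorrelated unit-variance measurements
Lc = np.linalg.cholesky(Sigma)
Xw = np.linalg.solve(Lc, X)
yw = np.linalg.solve(Lc, y)
b_white, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

print(np.allclose(b_gls, b_white))  # True
```

The theorem's contribution, per the abstract, is to obtain an equivalent uncorrelated system without performing such a whitening transformation.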
ERIC Educational Resources Information Center
Huang, Jie-Tsuen; Hsieh, Hui-Hsien
2011-01-01
The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…
ERIC Educational Resources Information Center
Wu, Chia-Huei; Chen, Lung Hung; Tsai, Ying-Mei
2009-01-01
This study introduced a formative model to investigate the utility of importance weighting on satisfaction scores with partial least squares analysis. Based on the bottom-up theory of satisfaction evaluations, the measurement structure for weighted/unweighted domain satisfaction scores was modeled as a formative model, whereas the measurement…
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which the conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods.
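Since the extended likelihood in the Gaussian case is quadratic in all parameters, profiling out a nuisance correction is equivalent to augmented least squares with a penalty row encoding the Type B PDF. The setup below is a minimal assumed sketch, not the paper's case study: a straight-line fit where half of the points carry a common additive Type B correction c:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 20)
s = (np.arange(x.size) >= 10).astype(float)   # second-instrument flag
sigma, sigma_B = 0.5, 0.3                     # Type A and Type B sigmas
c_true = 0.4                                  # hidden common correction
y = 1.0 + 2.0 * x + c_true * s + rng.normal(0.0, sigma, x.size)

# Unknowns (a, b, c); data rows weighted by 1/sigma
A = np.column_stack([np.ones_like(x), x, s]) / sigma
b = y / sigma
# Penalty row implements the Gaussian Type B PDF on the nuisance c
A = np.vstack([A, [0.0, 0.0, 1.0 / sigma_B]])
b = np.append(b, 0.0)
a_hat, b_hat, c_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(a_hat, b_hat, c_hat)
```

Solving the augmented system jointly and reporting only (a, b) gives the same point estimates as minimizing the profile of the extended quadratic objective over c.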
An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model
ERIC Educational Resources Information Center
Kim, Seonghoon
2010-01-01
The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…
Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Atanasov, Dimitar V.
2012-01-01
Many models of cognitive diagnosis, including the "least squares distance model" (LSDM), work under the "conjunctive" assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a "disjunctive" version of the LSDM under which the correct item response occurs when "at…
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data
ERIC Educational Resources Information Center
Manolov, Rumen; Solanas, Antonio
2013-01-01
The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique that yields similar information. The…
Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.
ERIC Educational Resources Information Center
Willett, John B.; Singer, Judith D.
In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…
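The situation the abstract describes (error variance that grows with a predictor) has a standard remedy: weight each observation by the inverse of its error variance and solve the normal equations (X'WX)β = X'Wy. A minimal sketch with a made-up variance structure Var(e_i) ∝ x_i²:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3 * x)   # error std. dev. grows with x

w = 1.0 / x**2                                  # assumed variance structure Var(e_i) ∝ x_i^2
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
# WLS: solve (X' W X) beta = X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

In practice the weights are rarely known and must themselves be estimated, which is precisely the caveat the entry warns the statistical consumer about.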
ERIC Educational Resources Information Center
Rocconi, Louis M.
2013-01-01
This study examined the differing conclusions one may come to depending upon the type of analysis chosen: hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities in three ways: (1) an OLS regression with the student…
Risk Bounds for Regularized Least-Squares Algorithm with Operator-Valued Kernels
2005-05-16
De Vito, Ernesto; Caponnetto, Andrea
Wavelet-generalized least squares: a new BLU estimator of linear regression models with 1/f errors.
Fadili, M J; Bullmore, E T
2002-01-01
Long-memory noise is common to many areas of signal processing and can seriously confound estimation of linear regression model parameters and their standard errors. Classical autoregressive moving average (ARMA) methods can adequately address the problem of linear time invariant, short-memory errors but may be inefficient and/or insufficient to secure type 1 error control in the context of fractal or scale invariant noise with a more slowly decaying autocorrelation function. Here we introduce a novel method, called wavelet-generalized least squares (WLS), which is (to a good approximation) the best linear unbiased (BLU) estimator of regression model parameters in the context of long-memory errors. The method also provides maximum likelihood (ML) estimates of the Hurst exponent (which can be readily translated to the fractal dimension or spectral exponent) characterizing the correlational structure of the errors, and the error variance. The algorithm exploits the whitening or Karhunen-Loève-type property of the discrete wavelet transform to diagonalize the covariance matrix of the errors generated by an iterative fitting procedure after both data and design matrix have been transformed to the wavelet domain. Properties of this estimator, including its Cramér-Rao bounds, are derived theoretically and compared to its empirical performance on a range of simulated data. Compared to ordinary least squares and ARMA-based estimators, WLS is shown to be more efficient and to give excellent type 1 error control. The method is also applied to some real (neurophysiological) data acquired by functional magnetic resonance imaging (fMRI) of the human brain. We conclude that wavelet-generalized least squares may be a generally useful estimator of regression models in data complicated by long-memory or fractal noise.
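The core idea, whitening correlated errors so that ordinary least squares becomes efficient again, can be illustrated without the wavelet machinery. The sketch below uses generalized least squares with a toy AR(1) error covariance standing in for the paper's 1/f noise; the covariance is factorized by Cholesky rather than diagonalized by a wavelet transform, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])

# Strongly persistent AR(1) errors stand in for long-memory 1/f noise.
rho = 0.9
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal(0.0, 0.1)
y = X @ np.array([1.0, 2.0]) + e

# GLS: whiten data and design with L^{-1}, where V = L L' (Cholesky of the
# error covariance), then run ordinary least squares on the whitened system.
idx = np.arange(n)
V = 0.01 / (1 - rho**2) * rho ** np.abs(np.subtract.outer(idx, idx))
L = np.linalg.cholesky(V)
Xw = np.linalg.solve(L, X)
yw = np.linalg.solve(L, y)
beta_gls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
```

The wavelet-domain version replaces the Cholesky factor with the discrete wavelet transform, which approximately diagonalizes the covariance of long-memory errors, so the whitening step becomes a cheap elementwise rescaling.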
Cross-term free based bistatic radar system using sparse least squares
NASA Astrophysics Data System (ADS)
Sevimli, R. Akin; Cetin, A. Enis
2015-05-01
Passive Bistatic Radar (PBR) systems use illuminators of opportunity, such as FM, TV, and DAB broadcasts. The most common illuminators of opportunity used in PBR systems are FM radio stations. Single-FM-channel PBR systems do not have high range resolution and may turn out to be noisy. In order to enhance the range resolution of PBR systems, algorithms using several FM channels at the same time have been proposed. In standard methods, consecutive FM channels are translated to baseband as is and fed to the matched filter to compute the range-Doppler map. Multichannel FM-based PBR systems have better range resolution than single-channel systems; however, spurious sidelobe peaks occur as a side effect. In this article, we linearly predict the surveillance signal using the modulated and delayed reference signal components. We vary the modulation frequency and the delay to cover the entire range-Doppler plane. Whenever there is a target at a specific range and Doppler value, the prediction error is minimized. The cost function of the linear prediction equation has three components: the first term is the real part of the ordinary least squares term, the second is the imaginary part of the least squares term, and the third is the l2-norm of the prediction coefficients. Separate minimization of the real and imaginary parts reduces the sidelobes and decreases the noise level of the range-Doppler map. The third term enforces a sparse solution of the least squares problem. We experimentally observed that this approach is better than both the standard least squares and other sparse least squares approaches in terms of sidelobes. Extensive simulation examples will be presented in the final form of the paper.
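The prediction step can be sketched in simplified form. The abstract's cost function treats real and imaginary residuals separately; the stand-in below instead solves a joint complex regularized least-squares problem (ridge form, closed solution), which captures the structure of predicting the surveillance signal from modulated/delayed reference atoms but is not the authors' exact algorithm. The dictionary, target coefficient, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 16
# Columns of D play the role of modulated/delayed reference-signal atoms.
D = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
w_true = np.zeros(m, dtype=complex)
w_true[3] = 2.0 + 1.0j                    # one "target" range-Doppler cell
s = D @ w_true + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Regularised least squares: minimise ||s - D w||^2 + lam * ||w||^2
# (joint real+imaginary residual, l2 penalty on the prediction coefficients).
lam = 1.0
w = np.linalg.solve(D.conj().T @ D + lam * np.eye(m), D.conj().T @ s)
peak = int(np.argmax(np.abs(w)))          # strongest cell -> detected target
```

Sweeping the delay/modulation frequency of the atoms over the range-Doppler plane and plotting the prediction error (or coefficient magnitude) per cell produces the map the abstract describes.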
NASA Astrophysics Data System (ADS)
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Taking into account that this error is a strongly nonlinear function of the environmental temperature and of the acceleration exciting the sensor, its correction may not be done off-line, and it requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The sections of the paper show the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, a reduction by almost two orders of magnitude of the absolute maximum acceleration error due to environmental temperature variation was obtained.
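The identification step amounts to fitting a polynomial error model in temperature and applied acceleration by least squares. The sketch below uses a hypothetical bias surface and basis (constant, T, T², a, a·T); the paper's actual error-model matrix is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
# Test grid: 9 chamber temperatures (deg C) x 5 applied accelerations (g).
T = np.repeat(np.linspace(-20.0, 60.0, 9), 5)
a = np.tile(np.linspace(-2.0, 2.0, 5), 9)
# Hypothetical measured bias: nonlinear in T and coupled with a.
err = 0.01 * T + 2e-4 * T**2 + 5e-3 * a * T + rng.normal(0.0, 0.005, T.size)

# Least-squares fit of the error model  err ~ c0 + c1*T + c2*T^2 + c3*a + c4*a*T
M = np.column_stack([np.ones_like(T), T, T**2, a, a * T])
c = np.linalg.lstsq(M, err, rcond=None)[0]
residual = err - M @ c                     # error remaining after compensation
```

On-line compensation then evaluates the fitted model from the live temperature reading and subtracts it from the raw accelerometer output.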
Bao, Yidan; Kong, Wenwen; Liu, Fei; Qiu, Zhengjun; He, Yong
2012-10-31
Amino acids are quite important indices to indicate the growth status of oilseed rape under herbicide stress. Near infrared (NIR) spectroscopy combined with chemometrics was applied for fast determination of glutamic acid in oilseed rape leaves. The optimal spectral preprocessing method was obtained after comparing Savitzky-Golay smoothing, standard normal variate, multiplicative scatter correction, first and second derivatives, detrending and direct orthogonal signal correction. Linear and nonlinear calibration methods were developed, including partial least squares (PLS) and least squares-support vector machine (LS-SVM). The most effective wavelengths (EWs) were determined by the successive projections algorithm (SPA), and these wavelengths were used as the inputs of the PLS and LS-SVM models. The best prediction results were achieved by the SPA-LS-SVM (Raw) model with correlation coefficient r = 0.9943 and root mean square error of prediction (RMSEP) = 0.0569 for the prediction set. These results indicated that NIR spectroscopy combined with SPA-LS-SVM was feasible for the fast and effective detection of glutamic acid in oilseed rape leaves. The selected EWs could be used to develop spectral sensors, and the important and basic amino acid data were helpful to study the function mechanism of herbicide.
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-01-01
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes. PMID:26393611
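Unlike a standard SVM, LS-SVM regression replaces the inequality constraints with equality constraints, so training reduces to one linear system in the bias b and the dual coefficients alpha. A minimal sketch of that standard formulation (with an RBF kernel and invented 1-D training data, not the paper's ultrasonic features):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.02, 60)   # toy target function

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

gamma = 100.0                                      # regularisation constant
n = X.shape[0]
# LS-SVM dual system:  [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = rbf_kernel(X, X) + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

def predict(Xq):
    return rbf_kernel(Xq, X) @ alpha + b
```

Model tuning in the paper corresponds to choosing gamma, the kernel width sigma, and the input feature subset, typically by cross-validation.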
NASA Astrophysics Data System (ADS)
Pham, Phong; Guo, Yin
2013-04-01
The interpolating moving least-squares (IMLS) approach for constructing potential energy surfaces has been developed and employed in standard classical trajectory simulations in the past few years. We extend the approach to the tunneling regime by combining the IMLS fitting method and the semiclassical scheme that incorporates tunneling into classical trajectory calculations. Dynamics of cis-trans isomerization in nitrous acid (HONO) is studied as a test case to investigate various aspects of the approach such as the strategy for growing the surface, the basis set employed, the scaling of the IMLS fits, and the accuracy of the surface required for obtaining converged rate coefficients. The validity of the approach is demonstrated through comparison with other semiclassical and quantum mechanical studies on HONO.
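The fitting machinery can be illustrated in one dimension. Interpolating MLS fits a low-order polynomial around each evaluation point by weighted least squares, with weight functions that diverge at the data points so the surface passes through them exactly. The singular-weight form below is one common choice, not necessarily the exact weights of the cited work, and the sine target is a stand-in for a potential energy surface.

```python
import numpy as np

def imls_eval(xq, xs, ys, degree=2, p=2, eps=1e-12):
    """1-D interpolating moving least squares: at each query point, fit a
    local polynomial with weights that blow up at the data points."""
    out = np.empty_like(xq)
    for k, x in enumerate(xq):
        w = 1.0 / ((x - xs) ** (2 * p) + eps)      # singular (interpolating) weights
        V = np.vander(xs - x, degree + 1)          # local basis centred at x
        sw = np.sqrt(w)
        coef = np.linalg.lstsq(sw[:, None] * V, sw * ys, rcond=None)[0]
        out[k] = coef[-1]                          # local fit evaluated at x
    return out

xs = np.linspace(0.0, np.pi, 9)                    # "ab initio" sample points
ys = np.sin(xs)
xq = np.array([0.5, 1.3, 2.2])
approx = imls_eval(xq, xs, ys)
```

Growing the surface, as in the trajectory application, means adding new (xs, ys) points where the estimated fitting error along trajectories is largest.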
Power-law modeling based on least-squares criteria: consequences for system analysis and simulation.
Hernández-Bermejo, B; Fairén, V; Sorribas, A
2000-10-01
The power-law formalism was initially derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The resulting models, either as generalized mass action (GMA) or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. This approach has been successfully used as a modeling tool in many applications from cell metabolism to population dynamics. Without leaving the general formalism, we recently proposed to derive the power-law representation in an alternative way that uses least-squares (LS) minimization instead of the traditional derivation based on Taylor series [B. Hernández-Bermejo, V. Fairén, A. Sorribas, Math. Biosci. 161 (1999) 83-94]. It was shown that the resulting LS power-law mimics the target rate-law in a wider range of concentration values than the classical power-law, and that the prediction of the steady-state using the LS power-law is closer to the actual steady-state of the target system. However, many implications of this alternative approach remained to be established. We explore some of them in the present work. Firstly, we extend the definition of the LS power-law within a given operating interval in such a way that no preferred operating point is selected. Besides providing an alternative to the classical Taylor power-law, which can be considered a particular case when the operating interval is reduced to a single point, the LS power-law so defined is consistent with the results that can be obtained by fitting experimental data points. Secondly, we show that the LS approach leads to a system description, either as an S-system or a GMA model, in which the systemic properties (such as the steady-state prediction or the log-gains) appear averaged over the corresponding interval when compared with the properties that can be computed from Taylor-derived models in different operating points within the considered operating…
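The contrast between the two derivations can be sketched on a concrete rate law. Below, a Michaelis-Menten rate (chosen here as a standard illustrative target, not taken from the paper) is approximated over an operating interval by an LS power-law fitted in log space, and at a single operating point by the classical Taylor power-law, whose kinetic order is the local log-log derivative.

```python
import numpy as np

Vmax, Km = 1.0, 1.0
X = np.linspace(0.5, 2.0, 50)             # operating interval for the substrate
v = Vmax * X / (Km + X)                    # target Michaelis-Menten rate law

# LS power-law  v ~ alpha * X**g  over the whole interval (linear fit in log space)
A = np.column_stack([np.ones_like(X), np.log(X)])
b0, g_ls = np.linalg.lstsq(A, np.log(v), rcond=None)[0]
alpha_ls = np.exp(b0)

# Classical Taylor power-law: kinetic order = d log v / d log X at one point X0
X0 = 1.0
g_taylor = Km / (Km + X0)                  # analytic log-log slope for MM kinetics
```

The fitted order g_ls is an interval-averaged kinetic order, whereas g_taylor reproduces the rate law only near X0, which is the averaging effect on systemic properties that the abstract describes.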
Kochiya, Yuko; Hirabayashi, Akari; Ichimaru, Yuhei
2017-09-16
To evaluate the dynamic nature of nocturnal heart rate variability, RR intervals recorded with a wearable heart rate sensor were analyzed using the Least Square Cosine Spectrum Method. Six 1-year-old infants participated in the study. A wearable heart rate sensor was placed on their chests to measure RR intervals and 3-axis acceleration. Heartbeat time series were analyzed every 30 s using the Least Square Cosine Spectrum Method, and an original parameter quantifying the regularity of the respiratory-related heart rate rhythm was extracted, referred to as "RA" (RA-COSPEC: Respiratory Area obtained by COSPEC). The RA value is higher when a cosine curve fits the original data series well. The time-sequential changes of RA showed cyclic variation with a significant rhythm during the night. The mean cycle length of RA was 70 ± 15 min, which is shorter than the young adults' cycle in our previous study. When RA was above a threshold of 3, HR was significantly lower than when RA was below 3. Thus, the regularity of the heart rate rhythm showed dynamic changes during the night in 1-year-old infants. The significant decrease of HR at times of higher RA suggests increased parasympathetic activity; we suspect that higher RA reflects a regular respiratory pattern during the night. This analysis system may be useful for quantitative assessment of the regularity and dynamic changes of nocturnal heart rate variability in infants.
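The core of a least-squares cosine spectrum is fitting a cosine-plus-sine pair (plus a mean term) to the samples at each candidate frequency and recording the fitted amplitude; unlike an FFT, this works on unevenly sampled series such as RR intervals. The exact RA-COSPEC parameter is not specified in the abstract, so the sketch below shows only the underlying cosine fit, with an invented 30-s RR segment and respiratory frequency.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0.0, 30.0, 0.5)              # one 30-s analysis window
f_resp = 0.4                                # assumed respiratory frequency (Hz)
rr = 0.5 + 0.02 * np.cos(2 * np.pi * f_resp * t) + rng.normal(0.0, 0.005, t.size)

def ls_cosine_power(y, t, f):
    """Least-squares fit of a*cos + b*sin + c at frequency f; returns a^2 + b^2."""
    M = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    a, b, c = np.linalg.lstsq(M, y, rcond=None)[0]
    return a * a + b * b

freqs = np.arange(0.1, 1.0, 0.05)
powers = np.array([ls_cosine_power(rr, t, f) for f in freqs])
f_peak = freqs[np.argmax(powers)]           # dominant (respiratory-band) rhythm
```

A regularity index like RA would then summarize how sharply the fitted power concentrates at the respiratory frequency across successive 30-s windows.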
Application of nonlinear least squares methods to the analysis of solar spectra
NASA Technical Reports Server (NTRS)
Shaw, J. H.
1985-01-01
A fast method of retrieving vertical temperature profiles in the atmosphere and of determining the paths of the rays producing the ATMOS occultation spectra has been developed. The results from one set of occultation data appear to be consistent with other available data. A study of sources of error, a search for other suitable features for measurement in the spectra, and modification of the program to obtain mixing ratio profiles have been initiated.
Model updating of rotor systems by using Nonlinear least square optimization
NASA Astrophysics Data System (ADS)
Jha, A. K.; Dewangan, P.; Sarangi, M.
2016-07-01
Mathematical models of structures or machinery always differ from the existing physical system, because the fidelity of numerical predictions of the behavior of a physical system is limited by the assumptions used in developing the mathematical model. Model updating is therefore necessary so that the updated model replicates the physical system. This work focuses on the model updating of rotor systems at various speeds as well as at different modes of vibration. Support-bearing characteristics severely influence the dynamics of rotor systems such as turbines, compressors, pumps, electrical machines, and machine tool spindles. Therefore, the bearing parameters (stiffness and damping) are taken as the updating parameters. A finite element model of the rotor system is developed using Timoshenko beam elements. The unbalance response in the time domain and the frequency response function are calculated numerically and compared with experimental data to update the FE model. An algorithm based on the unbalance response in the time domain is proposed for updating rotor systems at different running speeds. An unbalance response assurance criterion (URAC) is defined to check the degree of correlation between the updated FE model and the physical model.
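The updating loop itself is a nonlinear least-squares problem: adjust the bearing parameters until the model's predicted response matches the measurements. The sketch below uses a hypothetical two-mass stand-in for the rotor (not the paper's Timoshenko FE model), matches natural frequencies rather than time-domain unbalance response, and solves for the bearing stiffness with Gauss-Newton iterations and a numerical Jacobian.

```python
import numpy as np

# Hypothetical two-mass rotor-bearing stand-in: shaft stiffness ks is known,
# bearing stiffness kb is the parameter to be updated.
m1, m2, ks = 10.0, 5.0, 4.0e6

def nat_freqs(kb):
    K = np.array([[ks + kb, -ks], [-ks, ks]])
    M = np.diag([m1, m2])
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2) / (2.0 * np.pi)     # natural frequencies in Hz

f_meas = nat_freqs(2.0e6)                  # "measured" modes from the true kb

kb = 5.0e5                                 # deliberately poor initial guess
for _ in range(20):                        # Gauss-Newton with numerical Jacobian
    r = nat_freqs(kb) - f_meas             # frequency residuals
    J = (nat_freqs(kb * 1.001) - nat_freqs(kb)) / (0.001 * kb)
    kb = kb - (J @ r) / (J @ J)
kb_updated = kb
```

A criterion like the proposed URAC would then score how well the updated model's unbalance response tracks the measured response over time, analogous to a modal assurance criterion for responses.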