#### Sample records for nonlinear least-squares fitting

1. On Least Squares Fitting Nonlinear Submodels.

ERIC Educational Resources Information Center

Bechtel, Gordon G.

Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

2. AKLSQF - LEAST SQUARES CURVE FITTING

NASA Technical Reports Server (NTRS)

Kantak, A. V.

1994-01-01

The Least Squares Curve Fitting program, AKLSQF, efficiently computes the polynomial that best fits uniformly spaced data in the least-squares sense. The program allows the user either to specify the tolerable least-squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least-squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least-squares polynomial in two steps. First, the data points are least-squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least-squares fit error. The degree of the fitting polynomial is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least-squares fitting error is printed to the screen. In general, the program can produce a curve fit of up to a 100th-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and in long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
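AKLSQF's error-driven mode, raising the polynomial degree until a user tolerance is met, can be sketched in a few lines of NumPy. A plain monomial `polyfit` stands in for the program's orthogonal factorial polynomials, and the data and tolerance below are invented for illustration:

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Increase the polynomial degree until the RMS least-squares
    fit error drops below `tol` (mirrors AKLSQF's error-driven mode)."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if err <= tol:
            return coeffs, degree, err
    return coeffs, degree, err  # tolerance never met: return best attempt

# Noisy cubic sampled at uniformly spaced points.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2 * x**3 - x + rng.normal(0, 0.01, x.size)
coeffs, degree, err = fit_to_tolerance(x, y, tol=0.05)
```

Degrees 1 and 2 cannot track the cubic term, so the loop stops at degree 3, where the fit error falls to roughly the noise level.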

3. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

ERIC Educational Resources Information Center

Kiers, Henk A. L.

1997-01-01

A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
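The iterate-OLS-to-solve-WLS idea can be illustrated on a hypothetical rank-1 model, where the ordinary least-squares step is a truncated SVD. The working-matrix update below is one standard form of the majorization step (bounding each weight by the largest weight), a sketch in the spirit of the approach rather than the paper's exact algorithm:

```python
import numpy as np

def wls_rank1(X, W, n_iter=100):
    """Weighted least-squares rank-1 approximation of X, computed by
    iterating an ordinary (unweighted) rank-1 fit on a working matrix.
    Minimizing ||Y - M||^2 majorizes the WLS loss sum(W * (X - M)**2)."""
    wmax = W.max()
    M = X.copy()
    losses = []
    for _ in range(n_iter):
        Y = M + (W / wmax) * (X - M)          # majorization working matrix
        U, s, Vt = np.linalg.svd(Y)
        M = s[0] * np.outer(U[:, 0], Vt[0])    # ordinary LS rank-1 fit
        losses.append(np.sum(W * (X - M) ** 2))
    return M, losses

rng = np.random.default_rng(1)
X = np.outer(rng.normal(size=8), rng.normal(size=5)) + 0.1 * rng.normal(size=(8, 5))
W = rng.uniform(0.2, 1.0, size=(8, 5))
M, losses = wls_rank1(X, W)
```

Because each OLS step minimizes a majorizer of the WLS loss, the recorded losses decrease monotonically, which is the convergence property the majorization construction buys.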

4. Deming's General Least Square Fitting

Energy Science and Technology Software Center (ESTSC)

1992-02-18

DEM4-26 is a generalized least squares fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.

5. Nonlinear least squares and regularization

SciTech Connect

Berryman, J.G.

1996-04-01

A problem frequently encountered in the earth sciences requires deducing physical parameters of the system of interest from measurements of some other (hopefully) closely related physical quantity. The obvious example in seismology (either surface reflection seismology or crosswell seismic tomography) is the use of measurements of sound wave traveltime to deduce wavespeed distribution in the earth and then subsequently to infer the values of other physical quantities of interest such as porosity, water or oil saturation, permeability, etc. The author presents and discusses some general ideas about iterative nonlinear output least-squares methods. The main result is that, if it is possible to do forward modeling on a physical problem in a way that permits the output (i.e., the predicted values of some physical parameter that could be measured) and the first derivative of the same output with respect to the model parameters (whatever they may be) to be calculated numerically, then it is possible (at least in principle) to solve the inverse problem using the method described. The main trick learned in this analysis comes from the realization that the steps in the model updates may have to be quite small in some cases for the implied guarantees of convergence to be realized.
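The scheme described here, forward modeling plus numerical derivatives driving an iterative output least-squares update with deliberately small steps, can be sketched compactly. The `forward` model below, a one-parameter traveltime/slowness toy problem, is invented for illustration:

```python
import numpy as np

def gauss_newton(forward, p0, d_obs, n_iter=50, step=0.5, h=1e-6):
    """Iterative output least-squares: linearize the forward model with
    a finite-difference Jacobian, then take a damped fraction `step` of
    the Gauss-Newton update (small steps can be needed for the implied
    convergence guarantees to hold)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - forward(p)                    # data residual
        J = np.empty((r.size, p.size))
        for j in range(p.size):                   # numerical first derivative
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (forward(p + dp) - forward(p)) / h
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step * delta
    return p

# Toy "traveltime" data: d_i = x_i / wavespeed, true wavespeed = 3.
x = np.linspace(1.0, 5.0, 9)
forward = lambda p: x / p[0]
p_est = gauss_newton(forward, [1.0], x / 3.0)
```

Starting far from the answer (wavespeed 1 instead of 3), the damped iteration still converges because each half-step shrinks the residual; a full undamped step can overshoot on strongly nonlinear models.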

6. Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python

Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew

2016-06-01

Lmfit provides a high-level interface to non-linear optimization and curve-fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
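Lmfit's Parameter and Model objects wrap calls like the one below; this is a plain `scipy.optimize.leastsq` sketch of the underlying Levenberg-Marquardt curve fit, with an illustrative Gaussian lineshape and synthetic data:

```python
import numpy as np
from scipy.optimize import leastsq

# Gaussian lineshape, one of the common shapes Lmfit pre-builds.
def gaussian(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) ** 2) / (2 * wid ** 2))

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 101)
y = gaussian(x, 3.0, 0.5, 1.2) + rng.normal(0, 0.02, x.size)

# leastsq minimizes the sum of squared residuals via Levenberg-Marquardt.
residual = lambda p: y - gaussian(x, *p)
popt, _ = leastsq(residual, x0=[1.0, 0.0, 1.0])
```

In Lmfit the three plain floats in `x0` would become named Parameter objects with bounds and the fit would return a result object carrying confidence-interval estimates.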

7. Estimating errors in least-squares fitting

NASA Technical Reports Server (NTRS)

Richter, P. H.

1995-01-01

While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
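For the linear case, the parameter errors and the standard error of the fitted function both follow from the parameter covariance matrix. A sketch for an ordinary straight-line fit, assuming independent errors of common variance estimated from the residuals:

```python
import numpy as np

# cov(p) = s^2 (A^T A)^{-1}; the standard error of the fitted function
# at x0 is sqrt(a(x0)^T cov a(x0)), with a(x0) the basis-function row.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 1.5 * x + 2.0 + rng.normal(0, 0.3, x.size)

A = np.vander(x, 2)                      # design matrix, columns [x, 1]
p, res, *_ = np.linalg.lstsq(A, y, rcond=None)
s2 = res[0] / (x.size - 2)               # residual variance, n - m dof
cov = s2 * np.linalg.inv(A.T @ A)
perr = np.sqrt(np.diag(cov))             # standard errors of slope, intercept

x0 = 5.0
a0 = np.array([x0, 1.0])
fit_err = np.sqrt(a0 @ cov @ a0)         # standard error of the fit at x0
```

Note that `fit_err` varies with `x0`: it is smallest near the centroid of the data and grows toward the ends of the fitted range, which is the dependence on the independent variable the abstract refers to.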

8. BLS: Box-fitting Least Squares

Kovács, G.; Zucker, S.; Mazeh, T.

2016-07-01

BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.

9. Quantitative Evaluation of Cross-Peak Volumes in Multidimensional Spectra by Nonlinear-Least-Squares Curve Fitting

Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.

A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.

10. The 4850 cm^{-1} Spectral Region of CO_2: Constrained Multispectrum Nonlinear Least Squares Fitting Including Line Mixing, Speed Dependent Line Profiles and Fermi Resonance

Benner, D. Chris; Devi, V. Malathy; Nugent, Emily; Brown, Linda R.; Miller, Charles E.; Toth, Robert A.; Sung, Keeyoon

2009-06-01

Room temperature spectra of carbon dioxide were obtained with the Fourier transform spectrometers at the National Solar Observatory's McMath-Pierce telescope and at the Jet Propulsion Laboratory. The multispectrum nonlinear least squares fitting technique is being used to derive accurate spectral line parameters for the strongest CO_2 bands in the 4700-4930 cm^{-1} spectral region. Positions of the spectral lines were constrained to their quantum mechanical relationships, and the rovibrational constants were derived directly from the fit. Similarly, the intensities of the lines within each of the rovibrational bands were constrained to their quantum mechanical relationships, and the band strength and Herman-Wallis coefficients were derived directly from the fit. These constraints even include a pair of interacting bands with the interaction coefficient derived directly using both the positions and intensities of the spectral lines. Room temperature self and air Lorentz halfwidth and pressure induced line shift coefficients are measured for most lines. Constraints upon the positions improve measurement of pressure-induced shifts, and constraints on the intensities improve the measurement of the Lorentz halfwidths. Line mixing and speed dependent line shapes are also required and characterized. D. Chris Benner, C.P. Rinsland, V. Malathy Devi, M.A.H. Smith, and D. Atkins, J. Quant. Spectrosc. Radiat. Transfer 53, 705-721 (1995)

11. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

ERIC Educational Resources Information Center

Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

2004-01-01

A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…

12. Multisplitting for linear, least squares and nonlinear problems

SciTech Connect

Renaut, R.

1996-12-31

In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

13. Faraday rotation data analysis with least-squares elliptical fitting

White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D.

2010-10-01

A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.

14. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

NASA Technical Reports Server (NTRS)

Rosipal, Roman; Clancy, Daniel (Technical Monitor)

2002-01-01

This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

15. SENSOP: A Derivative-Free Solver for Nonlinear Least Squares with Sensitivity Scaling

PubMed Central

Chan, I.S.; Goldstein, A.A.; Bassingthwaighte, J.B.

2010-01-01

Nonlinear least squares optimization is used most often in fitting a complex model to a set of data. An ordinary nonlinear least squares optimizer assumes a constant variance for all the data points. This paper presents SENSOP, a weighted nonlinear least squares optimizer, which is designed for fitting a model to a set of data where the variance may or may not be constant. It uses a variant of the Levenberg–Marquardt method to calculate the direction and the length of the step change in the parameter vector. The method for estimating appropriate weighting functions applies generally to 1-dimensional signals and can be used for higher dimensional signals. Sets of multiple tracer outflow dilution curves present special problems because the data encompass three to four orders of magnitude; a fractional power function provides appropriate weighting giving success in parameter estimation despite the wide range. PMID:8116914
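The fractional-power weighting idea can be sketched with `scipy.optimize.curve_fit`. SENSOP itself is a Levenberg-Marquardt variant with sensitivity scaling; here `sigma ∝ y**gamma` simply illustrates weighting data that span several decades, with an invented exponential-decay model:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    return a * np.exp(-k * t)

rng = np.random.default_rng(3)
t = np.linspace(0, 8, 60)
y_true = 10.0 * np.exp(-1.0 * t)                 # spans ~3.5 decades
y = y_true * (1 + rng.normal(0, 0.05, t.size))   # roughly constant relative error

# Fractional-power weighting: sigma_i proportional to y_i**gamma,
# gamma in (0, 1]; gamma = 1 gives full relative (1/y) weighting.
gamma = 1.0
popt, _ = curve_fit(model, t, y, p0=[5.0, 0.5], sigma=np.abs(y) ** gamma)
```

With uniform weighting the few large early points would dominate the fit; the power-function `sigma` lets the small late values, which carry most of the information about the rate constant, contribute properly.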

16. Frequency domain analysis and synthesis of lumped parameter systems using nonlinear least squares techniques

NASA Technical Reports Server (NTRS)

Hays, J. R.

1969-01-01

Lumped-parameter system models are simplified and computationally advantageous in the frequency domain of linear systems. A nonlinear least squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.

17. STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.

PubMed

Thornhill, D P; Schwerzel, E

1985-05-01

A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence. PMID:3839530

18. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

ERIC Educational Resources Information Center

Ding, Cody S.; Davison, Mark L.

2010-01-01

Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

19. Iterative least-squares fitting programs in pharmacokinetics for a programmable handheld calculator.

PubMed

Messori, A; Donati-Cori, G; Tendi, E

1983-10-01

Programs that perform a nonlinear least-squares fit to data conforming to one-compartment oral or two-compartment intravenous pharmacokinetic models are described. The programs are designed for use on a Hewlett-Packard HP-41 CV programmable calculator equipped with an extended-functions module and one or two extended-memory modules. Initial estimates of variables in the model are calculated by the method of residuals and then iteratively improved by the use of the Gauss-Newton algorithm as modified by Hartley. This modification minimizes convergence problems. The iterative-fitting procedure includes a routine for estimation of lag time for the one-compartment oral model. Clinical applications of the programs are illustrated using previously published data. Programming steps and user instructions are listed. The programs provide an efficient and inexpensive method of estimating pharmacokinetic variables. PMID:6688925

20. Using R^2 to compare least-squares fit models: When it must fail

Technology Transfer Automated Retrieval System (TEKTRAN)

R^2 can be used correctly to select from among competing least-squares fit models when the data are fitted in common form and with common weighting. However, then R^2 comparisons become equivalent to comparisons of the estimated fit variance s^2 in unweighted fitting, or of the reduced chi-square in...

1. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

PubMed Central

Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

2014-01-01

In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter, based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220

2. Characterization of Titan 3-D acoustic pressure spectra by least-squares fit to theoretical model

Hartnett, E. B.; Carleen, E.

1980-01-01

A theoretical model for the acoustic spectra of undeflected rocket plumes is fitted to computed spectra of a Titan III-D at varying times after ignition, by a least-squares method. Tests for the goodness of the fit are made.

3. Parameter identification of Jiles-Atherton model with nonlinear least-square method

Kis, Péter; Iványi, Amália

2004-01-01

A new method for the parameter identification of the widely used scalar Jiles-Atherton (J-A) model of hysteresis is detailed in this paper. The extended J-A model is also investigated, including the eddy-current and anomalous loss terms, which are taken into account by modeling the frequency dependence of the hysteresis. The five parameters of the classical J-A model can be determined from a low-frequency hysteresis measurement. At higher frequencies the effect of the eddy currents is not negligible, and the J-A model must be extended. The loss of the hysteresis characteristics and the coercive field increase with frequency. A nonlinear least-squares method is used for parameter fitting of both the classical and extended J-A models. The curve fitting is executed automatically based on the initial parameters and the measured data.

4. Constrained hierarchical least square nonlinear equation solvers. [for indefinite stiffness and large structural deformations

NASA Technical Reports Server (NTRS)

1986-01-01

The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.

5. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

SciTech Connect

Jerome Blair

2008-05-15

An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.

6. A method for obtaining a least squares fit of a hyperplane to uncertain data

SciTech Connect

Reister, D.B.; Morris, M.D.

1994-05-01

For many least squares problems, the uncertainty is in one of the variables [for example, y = f(x) or z = f(x,y)]. However, for some problems, the uncertainty is in the geometric transformation from measured data to Cartesian coordinates, and all of the calculated variables are uncertain. When we seek the best least squares fit of a hyperplane to the data, we obtain an overdetermined system (we have n + 1 equations to determine n unknowns). By neglecting one of the equations at a time, we can obtain n + 1 different solutions for the unknown parameters. However, we cannot average the n + 1 hyperplanes to obtain a single best estimate. To obtain a solution without neglecting any of the equations, we solve an eigenvalue problem and use the eigenvector associated with the smallest eigenvalue to determine the unknown parameters. We have performed numerical experiments that compare our eigenvalue method to the approach of neglecting one equation at a time.
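The eigenvalue approach reads directly as code: center the points, form the scatter matrix, and take the eigenvector with the smallest eigenvalue as the hyperplane normal. The 3-D plane and noise level below are illustrative:

```python
import numpy as np

# Fit a plane n.x = c to points uncertain in all coordinates: the
# normal n is the eigenvector of the centered scatter matrix that
# belongs to the smallest eigenvalue.
rng = np.random.default_rng(4)
basis = rng.normal(size=(40, 2))
pts = basis @ np.array([[1.0, 0.0, 2.0],
                        [0.0, 1.0, -1.0]])   # exact plane: z = 2x - y
pts += rng.normal(0, 0.01, pts.shape)        # noise in every coordinate

centered = pts - pts.mean(axis=0)
evals, evecs = np.linalg.eigh(centered.T @ centered)
normal = evecs[:, 0]          # eigh sorts ascending: column 0 = smallest
normal /= normal[-1]          # scale so the z-coefficient is 1
```

Unlike a regression of z on (x, y), this treats all coordinates symmetrically, which is exactly what the uncertain-transformation setting requires.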

7. A new algorithm for constrained nonlinear least-squares problems, part 1

NASA Technical Reports Server (NTRS)

Hanson, R. J.; Krogh, F. T.

1983-01-01

A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.

8. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

Tellinghuisen, Joel

1996-10-01

One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
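The simplest merge case discussed here, an average with known (a priori) data variance, can be checked numerically: with a priori weights the two-step merge reproduces the one-step global fit. The sample size and partition below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 0.4                              # known (a priori) data error
full = rng.normal(10.0, sigma, 24)
subsets = np.split(full, 3)              # partitioned data

# Step 1: fit each subset; the a priori variance of each subset
# mean is sigma^2 / n_k, independent of the observed scatter.
means = [s.mean() for s in subsets]
variances = [sigma**2 / s.size for s in subsets]

# Step 2: merge with weights 1/variance (a priori weighting).
w = 1.0 / np.array(variances)
merged = np.sum(w * np.array(means)) / np.sum(w)

global_fit = full.mean()                 # one-step global fit
```

Had the weights been assessed a posteriori, from each subset's own sample variance, the merge would generally differ from the global fit and, as the abstract notes, the merged quantities would lose their expected distributions.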

9. Theoretic Fit and Empirical Fit: The Performance of Maximum Likelihood versus Generalized Least Squares Estimation in Structural Equation Models.

ERIC Educational Resources Information Center

Olsson, Ulf Henning; Troye, Sigurd Villads; Howell, Roy D.

1999-01-01

Used simulation to compare the ability of maximum likelihood (ML) and generalized least-squares (GLS) estimation to provide theoretic fit in models that are parsimonious representations of a true model. The better empirical fit obtained for GLS, compared with ML, was obtained at the cost of lower theoretic fit. (Author/SLD)

10. Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.

PubMed

Mohammed, Goran Abdulrahman; Hou, Ming

2016-03-01

The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical scores root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6%) for simulated data and (0.2-5%) for experimental data. The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies. PMID:26276984

11. An Alternating Least Squares Algorithm for Fitting the Two- and Three-Way DEDICOM Model and the IDIOSCAL Model.

ERIC Educational Resources Information Center

Kiers, Henk A. L.

1989-01-01

An alternating least squares algorithm is offered for fitting the DEcomposition into DIrectional COMponents (DEDICOM) model for representing asymmetric relations among a set of objects via a set of coordinates for the objects on a limited number of dimensions. An algorithm is presented for fitting the IDIOSCAL model in the least squares sense.…

12. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

SciTech Connect

Kirchhoff, William H.

2012-09-15

The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.

13. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

NASA Technical Reports Server (NTRS)

Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

1987-01-01

The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about + or - 20 deg.

14. Error Estimates Derived from the Data for Least-Squares Spline Fitting

SciTech Connect

Jerome Blair

2007-06-25

The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

15. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

2009-10-01

Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also introduced, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.
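The surface-fitting step can be sketched under the assumption of a quadratic aberration model: fit the surface to a sparse subset of unwrapped-phase points by linear least squares, then subtract it. The synthetic phase and sampling below are illustrative, not the paper's holograms:

```python
import numpy as np

# Synthetic unwrapped phase: quadratic wavefront curvature plus noise.
ny, nx = 64, 64
Y, X = np.mgrid[0:ny, 0:nx]
aberration = 1e-3 * (X - 30) ** 2 + 5e-4 * (Y - 20) ** 2 + 0.01 * X
phase = aberration + 0.002 * np.random.default_rng(6).normal(size=(ny, nx))

# Fit on a sparse subset (fewer points than the full hologram matrix).
idx = np.random.default_rng(7).choice(phase.size, 500, replace=False)
x, y = X.ravel()[idx], Y.ravel()[idx]
A = np.stack([x**2, y**2, x * y, x, y, np.ones_like(x)], axis=1).astype(float)
coef, *_ = np.linalg.lstsq(A, phase.ravel()[idx], rcond=None)

# Evaluate the fitted surface everywhere and subtract it.
surface = (coef[0] * X**2 + coef[1] * Y**2 + coef[2] * X * Y
           + coef[3] * X + coef[4] * Y + coef[5])
corrected = phase - surface
```

After subtraction the residual phase is at the noise level, which is the sense in which the fitted surface captures the unwanted curvature without needing a second (reference) exposure.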

16. Metafitting: Weight optimization for least-squares fitting of PTTI data

NASA Technical Reports Server (NTRS)

Douglas, Rob J.; Boulanger, J.-S.

1995-01-01

For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
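The standard-uncertainty calculation for a general weighted least-squares fit follows the usual covariance propagation, u^2 = g^T (A^T W A)^-1 g; a small numpy sketch with invented calibration times and per-run uncertainties:

```python
import numpy as np

# Weighted least-squares fit of offset + rate (frequency) to intermittent
# calibration runs, and the standard uncertainty of the fit's prediction.
t = np.array([0.0, 1.0, 2.0, 3.0, 7.0])        # days of calibration runs (invented)
sigma = np.array([1.0, 1.0, 1.0, 1.0, 2.0])    # per-run uncertainty, ns (invented)
W = np.diag(1.0 / sigma ** 2)

A = np.column_stack([np.ones_like(t), t])      # [offset, rate] design matrix
cov = np.linalg.inv(A.T @ W @ A)               # parameter covariance matrix

def standard_uncertainty(t_x):
    # Propagate the parameter covariance to the prediction at time t_x.
    g = np.array([1.0, t_x])
    return float(np.sqrt(g @ cov @ g))

u_extrap = standard_uncertainty(10.0)          # extrapolation past the last run
u_interp = standard_uncertainty(1.5)           # interpolation inside the schedule
```

Metafitting in the record's sense would then vary the weights (not just set them to 1/sigma^2) and minimize such an uncertainty for the chosen noise model.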

17. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

NASA Technical Reports Server (NTRS)

Crowley, J. K.; Clark, R. N.

1992-01-01

Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are compared over selected wavelength intervals to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times the band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals, as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
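The band-fitting step can be sketched schematically: remove a straight-line continuum over a band interval, scale the library band to the observed band by least squares, and report the degree of fit and band depth. The spectra below are synthetic stand-ins, not AVIRIS or library data:

```python
import numpy as np

wl = np.linspace(2.0, 2.4, 81)                         # microns (invented interval)
ref = 1.0 - 0.3 * np.exp(-((wl - 2.2) / 0.03) ** 2)    # library reference band
obs = 1.0 - 0.2 * np.exp(-((wl - 2.2) / 0.03) ** 2) + 0.01 * np.cos(40 * wl)

def continuum_removed(s):
    # Straight-line continuum between the interval endpoints.
    cont = np.interp(wl, [wl[0], wl[-1]], [s[0], s[-1]])
    return s / cont

r, o = continuum_removed(ref), continuum_removed(obs)
# Least-squares scale factor matching the reference band to the observation.
a = np.dot(r - 1, o - 1) / np.dot(r - 1, r - 1)
fitted = 1 + a * (r - 1)
band_depth = 1 - fitted.min()                          # band depth of the fit
fit_quality = np.corrcoef(o, fitted)[0, 1]             # "degree of fit"
```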

18. [Compensation-fitting extraction of dynamic spectrum based on least squares method].

PubMed

Lin, Ling; Wu, Ruo-Nan; Li, Yong-Cheng; Zhou, Mei; Li, Gang

2014-07-01

An extraction method for the dynamic spectrum (DS) with a high signal-to-noise ratio is key to achieving high-precision noninvasive detection of blood components. In order to further improve the accuracy and speed of DS extraction, the linear similarity between photoelectric plethysmographs (PPG) at each pair of different wavelengths was analyzed in principle, and an experimental verification was conducted. Based on this property, a compensation-fitting extraction method was proposed. Firstly, the baseline of the PPG at each wavelength is estimated and compensated using single-period sampling averages, which removes the effect of baseline drift caused by motion artifact. Secondly, the slope of a least squares fit between each single-wavelength PPG and the full-wavelength averaged PPG is acquired to construct the DS, which significantly suppresses random noise. Contrast experiments were conducted on 25 samples in the NIR and Vis wave bands, respectively. Flatness and processing time of the DS using compensation-fitting extraction were compared with those using single-trial estimation. In the NIR band, the average variance using compensation-fitting estimation was 69.0% of that using single-trial estimation, and in the Vis band it was 57.4%, which shows that the flatness of the DS is steadily improved. In the NIR band, the data processing time using compensation-fitting extraction could be reduced to 10% of that using single-trial estimation, and in the Vis band to 20%, which shows that the time for data processing is significantly reduced. Experimental results show that, compared with the single-trial estimation method, dynamic spectrum compensation-fitting extraction can steadily improve the signal-to-noise ratio of the DS, significantly improve estimation quality, reduce data processing time, and simplify the procedure. Therefore, this new method is expected to promote the development of noninvasive blood component measurement. PMID:25269319
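The slope-based DS construction can be sketched as follows; the waveform, wavelength count, and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)                  # one cardiac period, resampled
pulse = np.sin(2 * np.pi * t) ** 2              # shared pulsatile waveform (assumed)

true_ds = np.linspace(0.5, 1.5, 8)              # per-wavelength pulsation strength
ppg = np.array([d * pulse + rng.normal(scale=0.05, size=t.size) for d in true_ds])

# Slope of the least-squares line between each single-wavelength PPG and the
# full-wavelength average; the slopes across wavelength form the dynamic spectrum.
mean_ppg = ppg.mean(axis=0)
ds = np.array([np.polyfit(mean_ppg, p, 1)[0] for p in ppg])
```

Because all wavelengths share the pulse waveform, regressing each channel on the channel average averages down the random noise, which is the noise-suppression property the abstract reports.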

19. Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis

Landi, G.; Loli Piccolomini, E.; Nagy, J. G.

2015-11-01

In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of different incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.
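The Levenberg-Marquardt step of such a reconstruction can be illustrated on a toy nonlinear least-squares problem with SciPy (the exponential model is a stand-in, not the tomosynthesis forward model):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * x) + rng.normal(scale=0.01, size=x.size)

def residuals(p):
    # Residual vector of the nonlinear model y = a * exp(-b * x).
    a, b = p
    return a * np.exp(-b * x) - y

# Levenberg-Marquardt minimizes the sum of squared residuals.
sol = least_squares(residuals, x0=[1.0, 1.0], method="lm")
a_hat, b_hat = sol.x
```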

20. Determination of glucose concentration based on pulsed laser induced photoacoustic technique and least square fitting algorithm

Ren, Zhong; Liu, Guodong; Huang, Zhen

2015-08-01

In this paper, a noninvasive glucose concentration monitoring setup based on the photoacoustic technique was established. In this setup, a 532nm-pumped Q-switched Nd:YAG tunable pulsed laser with a repetition rate of 20Hz was used as the photoacoustic excitation light source, and an ultrasonic transducer with a central response frequency of 9.55MHz was used as the detector of the photoacoustic signal of glucose. As a preliminary exploration of blood glucose monitoring, a series of in vitro photoacoustic measurements of aqueous glucose solutions were performed using the established setup. The photoacoustic peak-to-peak values of different concentrations of aqueous glucose solutions, induced by the pulsed laser at output wavelengths from 1300nm to 2300nm in intervals of 10nm, were obtained with 512-fold averaging. Differential spectra and first-order derivative spectra were used to obtain the characteristic wavelengths. For the characteristic wavelengths of glucose, the least squares fitting algorithm was used to establish the relationship between the glucose concentrations and photoacoustic peak-to-peak values. The characteristic wavelengths and the predicted concentrations of the glucose solutions were obtained. Experimental results demonstrated that the prediction performance at the characteristic wavelengths of 1410nm and 1510nm was better than at others, and that this photoacoustic setup and analysis method has potential value for the monitoring of blood glucose concentration.
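The final calibration step, a least-squares line relating photoacoustic peak-to-peak values to concentration that is then inverted for prediction, can be sketched with invented numbers:

```python
import numpy as np

# Calibration data: glucose concentration (mg/dL) vs. photoacoustic
# peak-to-peak value at one characteristic wavelength (all values invented).
conc = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
pp = 0.002 * conc + 1.0 + np.array([0.01, -0.02, 0.015, -0.005, 0.01, -0.01])

slope, intercept = np.polyfit(conc, pp, 1)     # least-squares calibration line

def predict_concentration(pp_value):
    # Invert the calibration line to predict concentration from a new reading.
    return (pp_value - intercept) / slope

estimate = predict_concentration(1.3)
```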

1. Comparison and Analysis of Nonlinear Least Squares Methods for Vision Based Navigation (vbn) Algorithms

Sheta, B.; Elhabiby, M.; Sheimy, N.

2012-07-01

A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and real-time captured images are used to georeference (i.e. six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter as Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use the proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum will require a non-linear approach to overcome the high degree of non-linearity that will exist in the case of large oblique images (i.e. large rotation angles). The main objective of this paper is investigating the estimation of the georeferencing parameters necessary for VBN of aerial vehicles in the case of large values of the rotation angles, which leads to non-linearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters because of the expected non-linearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters. Four gradient-based nonlinear least squares methods (Trust region, Trust region dogleg algorithm, Levenberg-Marquardt, and Quasi-Newton line search method) and one non-gradient method (Nelder-Mead simplex direct search) are employed for the six-parameter estimation process. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependence on the objective function alone, without any derivative information. Although, the tested gradient methods
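The solver families compared above are all available in SciPy; a sketch on a classic two-parameter test problem (Rosenbrock residuals, a stand-in for the collinearity model):

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Rosenbrock-style residuals stand in for the collinearity-equation model.
def residuals(p):
    return np.array([10.0 * (p[1] - p[0] ** 2), 1.0 - p[0]])

x0 = np.array([-1.2, 1.0])
# Gradient-based nonlinear least-squares solvers: trust region (trf),
# dogleg-style (dogbox), and Levenberg-Marquardt (lm).
solutions = {m: least_squares(residuals, x0, method=m)
             for m in ("trf", "dogbox", "lm")}

# Derivative-free alternative (Nelder-Mead) on the sum-of-squares objective;
# on hard problems it can stall, as the record reports.
nm = minimize(lambda p: float(np.sum(residuals(p) ** 2)), x0,
              method="Nelder-Mead")
```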

2. Analysis of magnetic measurement data by least squares fit to series expansion solution of 3-D Laplace equation

SciTech Connect

Blumberg, L.N.

1992-03-01

The authors have analyzed simulated magnetic measurement data for the SXLS bending magnet in a plane perpendicular to the reference axis at the magnet midpoint by fitting the data to an expansion solution of the 3-dimensional Laplace equation in curvilinear coordinates, as proposed by Brown and Servranckx. The method of least squares is used to evaluate the expansion coefficients and their uncertainties, and is compared to results from an FFT fit of 128 simulated data points on a 12-mm radius circle about the reference axis. They find that the FFT method gives smaller coefficient uncertainties than the Least Squares method when the data are within similar areas. The Least Squares method compares more favorably when a larger number of data points are used within a rectangular area of 30-mm vertical by 60-mm horizontal--perhaps the largest area within the 35-mm x 75-mm vacuum chamber for which data could be obtained. For a grid with 0.5-mm spacing within the 30 x 60 mm area the Least Squares fit gives much smaller uncertainties than the FFT. They are therefore in the favorable position of having two methods which can determine the multipole coefficients to much better accuracy than the tolerances specified to General Dynamics. The FFT method may be preferable since it requires only one Hall probe rather than the four envisioned for the least squares grid data. However, least squares can attain better accuracy with fewer probe movements. The time factor in acquiring the data will likely be the determining factor in the choice of method. They should further explore least squares analysis of a Fourier expansion of data on a circle or arc of a circle, since that method gives coefficient uncertainties without the need for multiple independent sets of data as needed by the FFT method.
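Both fitting routes can be reproduced on simulated field data on a measurement circle: a least-squares fit of a harmonic expansion, and an FFT of the same samples. On uniformly spaced circular data the two give identical coefficients (field values below are invented):

```python
import numpy as np

# Field on a measurement circle: B(theta) = sum_k [a_k cos(k t) + b_k sin(k t)].
n_pts, n_harm = 128, 6
theta = 2 * np.pi * np.arange(n_pts) / n_pts
field = 1.0 + 0.05 * np.cos(2 * theta) + 0.002 * np.sin(3 * theta)

# Least-squares evaluation of the expansion coefficients...
cols = [np.ones_like(theta)]
for k in range(1, n_harm + 1):
    cols += [np.cos(k * theta), np.sin(k * theta)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, field, rcond=None)

# ...and the FFT route on the same samples, for comparison.
F = np.fft.rfft(field) / n_pts
a_k = 2 * F.real[1:n_harm + 1]
b_k = -2 * F.imag[1:n_harm + 1]
```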

3. Multiparameter linear least-squares fitting to Poisson data one count at a time

NASA Technical Reports Server (NTRS)

Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

1995-01-01

A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(nbar_i) is used to approximate the uncertainties sigma_i in the data, where nbar_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 ke
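The central point, that weighting by expected rather than observed counts removes the small-count bias, can be demonstrated on a toy two-component Poisson model (dimensions and rates are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins = 2000
# Two-component linear model: counts_i ~ Poisson((M f)_i), mostly small counts.
M = np.column_stack([np.full(n_bins, 0.5), rng.uniform(0.1, 1.0, n_bins)])
f_true = np.array([2.0, 3.0])
counts = rng.poisson(M @ f_true)

def weighted_llsq(weights):
    # Weighted linear least squares via a rescaled lstsq solve.
    sw = np.sqrt(weights)
    return np.linalg.lstsq(M * sw[:, None], counts * sw, rcond=None)[0]

# Weights 1/n_i (zeros patched) anticorrelate with the data and bias the fit low.
naive = weighted_llsq(1.0 / np.maximum(counts, 1))

# Iterating with weights 1/E(n_i) taken from the current model removes the bias,
# in the spirit of the PLLSQ scheme the abstract describes.
f = weighted_llsq(np.ones(n_bins))
for _ in range(5):
    f = weighted_llsq(1.0 / np.clip(M @ f, 1e-9, None))
```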

4. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

PubMed

Rodrigo, Marianito R

2016-01-01

The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure requires knowledge of neither the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. PMID:26213145
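A sketch of the fitting idea with a generic double-exponential cooling model (this parameterization and the amplitudes are assumptions for illustration, not the Marshall-Hoare formula itself):

```python
import numpy as np
from scipy.optimize import least_squares

# Generic double-exponential cooling model: temperature at clock time t for
# cooling rates k1, k2 and time of death td (amplitudes invented).
T_amb = 20.0

def model(t, p):
    k1, k2, td = p
    s = t - td                                   # time since death
    return T_amb + 12.0 * np.exp(-k1 * s) + 5.0 * np.exp(-k2 * s)

t_obs = np.linspace(0.0, 6.0, 25)                # readings every 15 min
temps = model(t_obs, [0.4, 1.2, -2.0])           # death 2 h before first reading

# Nonlinear least squares over (k1, k2, td) from temperature readings alone.
sol = least_squares(lambda p: model(t_obs, p) - temps, x0=[0.35, 1.1, -1.5])
k1_hat, k2_hat, tod_hat = sol.x
```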

5. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

USGS Publications Warehouse

Heidari, M.; Moench, A.

1997-01-01

Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.

6. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

PubMed Central

Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

2012-01-01

To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on nonlinear least-squares support vector regression (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.

7. Signs of divided differences yield least squares data fitting with constrained monotonicity or convexity

Demetriou, I. C.

2002-09-01

Methods are presented for least squares data smoothing by using the signs of divided differences of the smoothed values. Professor M.J.D. Powell initiated the subject in the early 1980s and since then, theory, algorithms and FORTRAN software have made it applicable to several disciplines in various ways. Let us consider n data measurements of a univariate function which have been altered by random errors. Then it is usual for the divided differences of the measurements to show sign alterations, which are probably due to data errors. We make the least sum of squares change to the measurements, by requiring the sequence of divided differences of order m to have at most q sign changes for some prescribed integer q. The positions of the sign changes are integer variables of the optimization calculation, which implies a combinatorial problem whose solution can require about O(n^q) quadratic programming calculations in n variables and n-m constraints. Suitable methods have been developed for the following cases. It has been found that a dynamic programming procedure can calculate the global minimum for the important cases of piecewise monotonicity (m=1, q >= 1) and piecewise convexity/concavity (m=2, q >= 1) of the smoothed values. The complexity of the procedure in the case of m=1 is O(n^2 + qn log2 n) computer operations, which is reduced to only O(n) when q=0 (monotonicity) and q=1 (increasing/decreasing monotonicity). The case m=2, q >= 1 requires O(qn^2) computer operations and n^2 quadratic programming calculations, which is reduced to one and n-2 quadratic programming calculations when m=2, q=0 (i.e. convexity) and m=2, q=1 (i.e. convexity/concavity), respectively. Unfortunately, the technique that achieves this efficiency cannot generalize to the highly nonlinear case m >= 3, q >= 2. However, the case m >= 3, q=0 is solved by a special strictly
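For the simplest case above (m=1, q=0, monotone smoothing), the O(n) global minimum can be computed with the classic pool-adjacent-violators algorithm; a minimal sketch:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y
    (the piecewise-monotonicity case m=1, q=0)."""
    vals, wts = [], []
    for v in map(float, y):
        vals.append(v)
        wts.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            v1, w1 = vals.pop(), wts.pop()
            vals.append((w1 * v1 + w2 * v2) / (w1 + w2))
            wts.append(w1 + w2)
    return np.array([v for v, w in zip(vals, wts) for _ in range(w)])

y = np.array([1.0, 3.0, 2.0, 4.0, 3.5, 6.0])    # measurements with sign violations
fit = pava(y)
```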

8. Bounds on least-squares four-parameter sine-fit errors due to harmonic distortion and noise

SciTech Connect

Deyst, J.P.; Souders, T.M.; Solomon, O.M.

1994-03-01

Least-squares sine-fit algorithms are used extensively in signal processing applications. The parameter estimates produced by such algorithms are subject to both random and systematic errors when the record of input samples consists of a fundamental sine wave corrupted by harmonic distortion or noise. The errors occur because, in general, such sine-fits will incorporate a portion of the harmonic distortion or noise into their estimate of the fundamental. Bounds are developed for these errors for least-squares four-parameter (amplitude, frequency, phase, and offset) sine-fit algorithms. The errors are functions of the number of periods in the record, the number of samples in the record, the harmonic order, and fundamental and harmonic amplitudes and phases. The bounds do not apply to cases in which harmonic components become aliased.
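A least-squares four-parameter sine fit in the spirit of the record can be sketched with the standard iterative linearization over frequency (the signal parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1024
t = np.arange(n) / n
y = 1.5 * np.cos(2 * np.pi * 7.3 * t + 0.4) + 0.2 + rng.normal(scale=0.01, size=n)

# Four-parameter sine fit: solve the three-parameter (amplitude/phase/offset)
# fit at the trial frequency, then append a frequency-correction column
# and re-solve, iterating on the frequency.
w = 2 * np.pi * 7.2                       # initial frequency guess
for _ in range(8):
    c, s = np.cos(w * t), np.sin(w * t)
    D3 = np.column_stack([c, s, np.ones(n)])
    A, B, C = np.linalg.lstsq(D3, y, rcond=None)[0]
    # Frequency-correction column: derivative of A cos(wt) + B sin(wt) w.r.t. w.
    D4 = np.column_stack([c, s, np.ones(n), -A * t * s + B * t * c])
    A, B, C, dw = np.linalg.lstsq(D4, y, rcond=None)[0]
    w += dw

amplitude = np.hypot(A, B)
offset = C
```

With harmonic distortion added to `y`, the recovered amplitude and offset would absorb part of the distortion, which is the error source the record bounds.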

9. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

PubMed

Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

2016-01-01

The multipath effect is one of the main error sources in the Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results for the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in the tracking jitter compared to the CCRW discriminator. However, the improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275
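The truncated-SVD least-squares step can be sketched generically (the design matrix here is random, not a CCRW design):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(40, 20))
A[:, -1] = A[:, 0] + 1e-8 * rng.normal(size=40)   # nearly rank-deficient design
b = rng.normal(size=40)

# Truncated-SVD least squares: discard small singular values before inverting,
# which regularizes an ill-conditioned design problem.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-6 * s[0]                            # truncation threshold
x_trunc = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
x_full = Vt.T @ ((U.T @ b) / s)                   # untruncated: huge norm
```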

10. Technical Note: Review of methods for linear least-squares fitting of data and application to atmospheric chemistry problems

Cantrell, C. A.

2008-04-01

The representation of data, whether geophysical observations, numerical model output or laboratory results, by a best fit straight line is a routine practice in the geosciences and other fields. While the literature is full of detailed analyses of procedures for fitting straight lines to values with uncertainties, a surprising number of scientists blindly use the standard least squares method, such as found on calculators and in spreadsheet programs, that assumes no uncertainties in the x values. Here, the available procedures for estimating the best fit straight line to data, including those applicable to situations for uncertainties present in both the x and y variables, are reviewed. Representative methods that are presented in the literature for bivariate weighted fits are compared using several sample data sets, and guidance is presented as to when the somewhat more involved iterative methods are required, or when the standard least-squares procedure would be expected to be satisfactory. A spreadsheet-based template is made available that employs one method for bivariate fitting.
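One concrete bivariate-fit routine is orthogonal distance regression, available in scipy.odr, shown here as one stand-in for the iterative methods the review compares; the data and per-point uncertainties below are illustrative, not the review's sample data sets:

```python
import numpy as np
from scipy import odr

# Straight-line fit with uncertainties in both x and y (the bivariate case).
x = np.array([0.0, 0.9, 1.8, 2.6, 3.3, 4.4, 5.2, 6.1, 6.5, 7.4])
y = np.array([5.9, 5.4, 4.4, 4.6, 3.5, 3.7, 2.8, 2.8, 2.4, 1.5])
sx = np.full(x.size, 0.1)                # x uncertainties
sy = np.full(y.size, 0.2)                # y uncertainties

data = odr.RealData(x, y, sx=sx, sy=sy)
line = odr.Model(lambda beta, xv: beta[0] + beta[1] * xv)
fit = odr.ODR(data, line, beta0=[5.0, -0.5]).run()
intercept, slope = fit.beta
```

Unlike the standard calculator/spreadsheet least-squares fit, this weights the misfit in both coordinates, which is exactly the situation the review addresses.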

11. Technical Note: Review of methods for linear least-squares fitting of data and application to atmospheric chemistry problems

Cantrell, C. A.

2008-09-01

The representation of data, whether geophysical observations, numerical model output or laboratory results, by a best fit straight line is a routine practice in the geosciences and other fields. While the literature is full of detailed analyses of procedures for fitting straight lines to values with uncertainties, a surprising number of scientists blindly use the standard least-squares method, such as found on calculators and in spreadsheet programs, that assumes no uncertainties in the x values. Here, the available procedures for estimating the best fit straight line to data, including those applicable to situations for uncertainties present in both the x and y variables, are reviewed. Representative methods that are presented in the literature for bivariate weighted fits are compared using several sample data sets, and guidance is presented as to when the somewhat more involved iterative methods are required, or when the standard least-squares procedure would be expected to be satisfactory. A spreadsheet-based template is made available that employs one method for bivariate fitting.

12. Improved mapping of radio sources from VLBI data by least-square fit

NASA Technical Reports Server (NTRS)

Rodemich, E. R.

1985-01-01

A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data is modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the mean-square deviation is used to measure the closeness of this fit to the observed values, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.

13. Least squares fit of data to hyperbolic dose-response curves using a programmed minicalculator (TI-59).

PubMed

Schiff, J D

1983-05-01

Equations of the Michaelis-Menten form are frequently encountered in a number of areas of biochemical and pharmacological research. A program is presented for use on the programmable TI-59 calculator with added printer which performs an iterative least-squares fit of up to 80 data pairs to this equation and estimates the standard deviations and standard errors of the determined parameters. The program assigns equal weights to errors over the entire data range and is thus appropriate for situations in which data precision is independent of amplitude. PMID:6874133
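An equal-weights iterative least-squares fit of the Michaelis-Menten equation, as the record describes for the TI-59 program, takes a few lines with SciPy (data values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten model with equal weights on all points, appropriate when
# data precision is independent of amplitude (substrate and rate values invented).
def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rng = np.random.default_rng(6)
v = michaelis_menten(S, 10.0, 3.0) + rng.normal(scale=0.1, size=S.size)

popt, pcov = curve_fit(michaelis_menten, S, v, p0=[8.0, 2.0])
vmax_hat, km_hat = popt
std_errs = np.sqrt(np.diag(pcov))        # standard errors of Vmax and Km
```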

14. On the sensitivity of a least-squares fit of discretized linear hyperbolic equations to data

Callies, U.; Eppel, D. P.

1995-01-01

Difficulties are investigated which occur when trying to specify a noise-free initial model state as the solution of a variational data assimilation problem. A linear shallow water model is used to investigate the existence and physical basis of the model fit to data. As in this context the shape of the cost function is of crucial importance, the interrelations between the cost function's Hessian and specific model-data configurations are investigated. Special emphasis is put on the influence of the temporal/spatial data distribution and the choice of the scheme used for numerical model integration. It is illustrated how such details may cause intolerable uncertainties for those aspects of the recovered solution that are related to very small eigenvalues of the curvature operator. Due to the shortcomings of descent algorithms, uncontrolled large-amplitude error modes may remain invisible if a limited number of minimization cycles is applied. However, to render the retrieved smooth fields stable with respect to further iterations, prior knowledge has to be taken into account in the cost function definition.

15. A Component Prediction Method for Flue Gas of Natural Gas Combustion Based on Nonlinear Partial Least Squares Method

PubMed Central

Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

2014-01-01

Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input, based on a radial basis function neural network (RBFNN), is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden layer nodes of the RBFNN form the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used for evaluating the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are, respectively, reduced by 4.74%, 21.76%, and 5.32% compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness. PMID:24772020
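A minimal sketch of the extended-input idea: append RBF-node outputs to the input matrix and run a small hand-rolled PLS1 on the extended matrix. The dataset, RBF centers, widths, and component counts below are all assumptions, not the paper's network or spectra:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal PLS1 (NIPALS) regression; returns coefficients and centering terms."""
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        qk = yr @ t / tt
        Xr = Xr - np.outer(t, p)                 # deflate X and y
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)          # regression coefficients
    return B, xm, ym

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2       # nonlinear target

# Extended input: original variables plus the outputs of 10 RBF nodes.
centers = X[::20]
rbf = np.exp(-np.square(X[:, None, :] - centers).sum(axis=-1) / 0.5)
X_ext = np.hstack([X, rbf])

B, xm, ym = pls1(X_ext, y, n_comp=8)
pred = (X_ext - xm) @ B + ym

# Linear-only PLS baseline for comparison.
B_lin, xm_lin, ym_lin = pls1(X, y, n_comp=3)
pred_lin = (X - xm_lin) @ B_lin + ym_lin
```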

16. Blending moving least squares techniques with NURBS basis functions for nonlinear isogeometric analysis

Cardoso, Rui P. R.; Cesar de Sa, J. M. A.

2014-06-01

IsoGeometric Analysis (IGA) is increasing in popularity as a new numerical tool for the analysis of structures. IGA provides: (i) the possibility of using higher order polynomials for the basis functions; (ii) the smoothness needed for contact analysis; (iii) the possibility to operate directly on CAD geometry. The major drawback of IGA is the non-interpolatory character of the basis functions, which adds difficulty in handling essential boundary conditions. Nevertheless, IGA suffers from the same problems exhibited by other methods when it comes to reproducing isochoric and transverse shear strain deformations, especially for low order basis functions. In this work, projection techniques based on moving least squares (MLS) approximations are used to alleviate both the volumetric and the transverse shear locking in IGA. The main objective is to project the isochoric and transverse shear deformations from lower order subspaces by using the MLS, alleviating in this way the volumetric and the transverse shear locking on the fully-integrated space. Because different degrees in the approximation functions can be used in IGA, different Gauss integration rules can also be employed, making the procedures for locking treatment in IGA very dependent on the degree of the approximation functions used. The blending of MLS with Non-Uniform Rational B-Splines (NURBS) basis functions is a methodology to overcome different locking pathologies in IGA which can also be used for enrichment procedures. Numerical examples for three-dimensional NURBS with only translational degrees of freedom are presented for both shell-type and plane strain structures.

17. TENSOLVE: A software package for solving systems of nonlinear equations and nonlinear least squares problems using tensor methods

SciTech Connect

Bouaricha, A.; Schnabel, R.B.

1996-12-31

This paper describes a modular software package for solving systems of nonlinear equations and nonlinear least squares problems, using a new class of methods called tensor methods. It is intended for small to medium-sized problems, say with up to 100 equations and unknowns, in cases where it is reasonable to calculate the Jacobian matrix or approximate it by finite differences at each iteration. The software allows the user to select between a tensor method and a standard method based upon a linear model. The tensor method models F(x) by a quadratic model, where the second-order term is chosen so that the model is hardly more expensive to form, store, or solve than the standard linear model. Moreover, the software provides two different global strategies: a line search and a two-dimensional trust region approach. Test results indicate that, in general, tensor methods are significantly more efficient and robust than standard methods on small and medium-sized problems, in terms of both iterations and function evaluations.
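The class of problems TENSOLVE targets can be posed with a standard linear-model (Newton-type) solver for comparison. A minimal sketch with SciPy's general-purpose nonlinear least-squares routine on a hypothetical two-equation system; this illustrates the standard method, not the tensor method itself:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical two-equation nonlinear system F(x) = 0
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,   # circle of radius 2
                     np.exp(x[0]) + x[1] - 1.0])    # exponential constraint

# A standard method based on a linear (Jacobian) model of F
sol = least_squares(F, x0=np.array([-1.0, 0.0]))
print(sol.x)
```

The tensor method would augment the linear model of F with a cheap second-order term; the calling pattern (residual function plus starting point) is the same.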

18. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

2015-12-01

In the present work an effort has been made to simultaneously estimate plasma parameters, namely the electron density, electron temperature, ground-state atom density, ground-state ion density, and metastable-state density, from the observed visible spectra of a Penning plasma discharge (PPD) source using least squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficients (PECs) under the optically thin plasma condition. With this condition the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model code, improves the electron temperature estimate in the simultaneous measurement.

19. Using nonlinear least squares to assess relative expression and its uncertainty in real-time qPCR studies.

PubMed

Tellinghuisen, Joel

2016-03-01

Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program. PMID:26562324
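A hedged sketch of the standard-curve step: fitting Cq against log10 dilution by least squares and converting the slope to an amplification efficiency via the textbook relation AE = 10^(-1/slope) - 1. The data and numbers are invented; this is not the paper's KaleidaGraph algorithm:

```python
import numpy as np
from scipy.optimize import curve_fit

log_dil = np.array([0.0, -1.0, -2.0, -3.0, -4.0])   # standard-curve dilutions
rng = np.random.default_rng(1)
cq = 20.0 - 3.45 * log_dil + 0.05 * rng.normal(size=log_dil.size)

def line(x, intercept, slope):
    return intercept + slope * x

popt, pcov = curve_fit(line, log_dil, cq)
intercept, slope = popt
eff = 10.0 ** (-1.0 / slope) - 1.0                   # amplification efficiency
se_slope = np.sqrt(pcov[1, 1])                       # SE of the slope
print(slope, eff, se_slope)
```

The covariance matrix returned by the fit is what lets the AE uncertainty propagate automatically into the adjusted ΔΔCq, as described above.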

20. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

USGS Publications Warehouse

Cooley, R.L.; Hill, M.C.

1992-01-01

Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
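A minimal sketch of a modified (step-damped) Gauss-Newton iteration of the kind compared above, applied to a toy exponential-decay model; the model, data, and damping rule are illustrative assumptions:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(2)
y = 2.5 * np.exp(-1.3 * t) + 0.01 * rng.normal(size=t.size)

def residuals(p):
    a, b = p
    return y - a * np.exp(-b * t)

def jacobian(p):
    a, b = p
    e = np.exp(-b * t)
    return np.column_stack([-e, a * t * e])          # d r/d a, d r/d b

def cost(p):
    r = residuals(p)
    return r @ r

p = np.array([1.0, 1.0])                             # initial guess
for _ in range(50):
    J, r = jacobian(p), residuals(p)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)    # Gauss-Newton step
    lam = 1.0
    while cost(p + lam * step) > cost(p) and lam > 1e-8:
        lam *= 0.5                                   # step halving: the "modification"
    p = p + lam * step
    if np.linalg.norm(lam * step) < 1e-12:
        break
print(p)
```

The full-Newton and quasi-Newton variants in the record differ only in how the Hessian of the sum of squares is completed beyond the J^T J term used here.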

1. Estimation of the free core nutation period by the sliding-window complex least-squares fit method

Zhou, Yonghong; Zhu, Qiang; Salstein, David A.; Xu, Xueqing; Shi, Si; Liao, Xinhao

2016-05-01

Estimation of the free core nutation (FCN) period is a challenging prospect. Two methods, one direct and one indirect, have mostly been applied in the past to address the problem by analyzing the Earth orientation parameters observed by very long baseline interferometry. The indirect method estimates the FCN period from resonance effects of the FCN on forced nutation terms, whereas the direct method estimates the FCN period using the Fourier transform (FT) approach. However, the FCN period estimated by the direct FT technique suffers from the non-stationary characteristics of the celestial pole offsets (CPO). In this study, the FCN period is estimated by another direct method, the sliding-window complex least-squares fit (SCLF). The estimated values of the FCN period for the full span 1984.0-2014.0 and four subsets (1984.0-2000.0, 2000.0-2014.0, 1984.0-1990.0, 1990.0-2014.0) range from -428.8 to -434.3 mean solar days. From the FT method to the SCLF method, the uncertainty of the estimated FCN period falls from several tens of days to several days. Thus, the SCLF method may serve as an independent direct way to estimate the FCN period, complementing and validating the indirect resonance method that has frequently been used before.
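The complex least-squares fit can be sketched on synthetic data: for each trial period the complex amplitude is fitted linearly, and the period minimizing the misfit is selected (the real method slides this fit over windows of CPO data; the oscillation parameters below are illustrative only):

```python
import numpy as np

t = np.arange(0.0, 3000.0, 5.0)                      # epochs in days
rng = np.random.default_rng(3)
z = (0.2 * np.exp(2j * np.pi * t / -430.0)           # retrograde oscillation
     + 0.02 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size)))

def misfit(P):
    # Complex least squares: fit one complex amplitude for trial period P
    basis = np.exp(2j * np.pi * t / P)[:, None]
    amp, *_ = np.linalg.lstsq(basis, z, rcond=None)
    return np.sum(np.abs(z - basis[:, 0] * amp[0]) ** 2)

periods = np.arange(-445.0, -415.0, 0.1)
P_best = periods[np.argmin([misfit(P) for P in periods])]
print(P_best)
```

The negative period encodes the retrograde sense of the FCN, matching the sign convention of the record.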

2. A nonlinear least-squares inverse analysis of strike-slip faulting with application to the San Andreas fault

NASA Technical Reports Server (NTRS)

Williams, Charles A.; Richardson, Randall M.

1988-01-01

A nonlinear weighted least-squares analysis was performed for a synthetic elastic layer over a viscoelastic half-space model of strike-slip faulting. Also, an inversion of strain rate data was attempted for the locked portions of the San Andreas fault in California. Based on an eigenvector analysis of synthetic data, it is found that the only parameter which can be resolved is the average shear modulus of the elastic layer and viscoelastic half-space. The other parameters were obtained by performing a suite of inversions for the fault. The inversions on data from the northern San Andreas resulted in predicted parameter ranges similar to those produced by inversions on data from the whole fault.

3. Nonlinear Least Squares Method for Gyros Bias and Attitude Estimation Using Satellite Attitude and Orbit Toolbox for Matlab

Silva, W. R.; Kuga, H. K.; Zanardi, M. C.

2015-10-01

The knowledge of attitude determination is essential to the safety and control of the satellite and payload, and this involves nonlinear estimation techniques. Here one focuses on determining the attitude and the gyro drift of a real satellite, CBERS-2 (China Brazil Earth Resources Satellite), using simulated measurements provided by the propagator PROPAT Satellite Attitude and Orbit Toolbox for Matlab. The method used for the estimation was Nonlinear Least Squares Estimation (NLSE). The attitude dynamical model is described by nonlinear equations involving the Euler angles. The attitude sensors available are two DSS (Digital Sun Sensor), two IRES (Infra-Red Earth Sensor), and one triad of mechanical gyros. The two IRES give direct measurements of roll and pitch angles with a certain level of error. The two DSS measurements are nonlinear functions of the roll, pitch, and yaw attitude angles. Gyros are very important sensors, as they provide direct incremental angles or angular velocities. However, gyros present several sources of error, of which drift is the most troublesome. Results show that one can reach accuracies in attitude determination within the prescribed requirements, besides providing estimates of the gyro drifts, which can be further used to enhance the gyro error model.

4. Linearized iterative least-squares (LIL): a parameter-fitting algorithm for component separation in multifrequency cosmic microwave background experiments such as Planck

Khatri, Rishi

2015-08-01

We present an efficient algorithm for least-squares parameter fitting, optimized for component separation in multifrequency cosmic microwave background (CMB) experiments. We sidestep some of the problems associated with non-linear optimization by taking advantage of the quasi-linear nature of the foreground model. We demonstrate our algorithm, linearized iterative least-squares (LIL), on the publicly available Planck sky model FFP6 simulations and compare our results with those of other algorithms. We work at full Planck resolution and show that degrading the resolution of all channels to that of the lowest frequency channel is not necessary. Finally, we present results for publicly available Planck data. Our algorithm is extremely fast, fitting six parameters to the seven lowest Planck channels at full resolution (50 million pixels) in less than 160 CPU minutes (or a few minutes running in parallel on a few tens of cores). LIL is therefore easily scalable to future experiments, which may have even higher resolution and more frequency channels. We also, naturally, propagate the uncertainties in different parameters due to noise in the maps, as well as the degeneracies between the parameters, to the final errors in the parameters using the Fisher matrix. One indirect application of LIL could be a front-end for Bayesian parameter fitting to find the maximum likelihood to be used as the starting point for Gibbs sampling. We show that for rare components, such as carbon monoxide emission, present in a small fraction of sky, the optimal approach should combine parameter fitting with model selection. LIL may also be useful in other astrophysical applications that satisfy quasi-linearity criteria.

5. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

USGS Publications Warehouse

Yobbi, D.K.

2000-01-01

A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the parameter values reported in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.

6. Nonlinear-Least-Squares Analysis of Slow-Motion EPR Spectra in One and Two Dimensions Using a Modified Levenberg-Marquardt Algorithm

Budil, David E.; Lee, Sanghyuk; Saxena, Sunil; Freed, Jack H.

The application of the "model trust region" modification of the Levenberg-Marquardt minimization algorithm to the analysis of one-dimensional CW EPR and multidimensional Fourier-transform (FT) EPR spectra, especially in the slow-motion regime, is described. The dynamic parameters describing the slow motion are obtained from least-squares fitting of model calculations based on the stochastic Liouville equation (SLE) to experimental spectra. The trust-region approach is inherently more efficient than the standard Levenberg-Marquardt algorithm, and the efficiency of the procedure may be further increased by a separation-of-variables method in which a subset of fitting parameters is independently minimized at each iteration, thus reducing the number of parameters to be fitted by nonlinear least squares. A particularly useful application of this method occurs in the fitting of multicomponent spectra, for which it is possible to obtain the relative population of each component by the separation-of-variables method. These advantages, combined with recent improvements in the computational methods used to solve the SLE, have led to an order-of-magnitude reduction in computing time, and have made it possible to carry out interactive, real-time fitting on a laboratory workstation with a graphical interface. Examples of fits to experimental data are given, including multicomponent CW EPR spectra as well as two- and three-dimensional FT EPR spectra. Emphasis is placed on the analytic information available from the partial derivatives utilized in the algorithm, and how it may be used to estimate the condition and uniqueness of the fit, as well as to estimate confidence limits for the parameters in certain cases.
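The separation-of-variables idea (solving the linear amplitudes exactly at each iteration so that only the nonlinear parameters remain for Levenberg-Marquardt) can be sketched on a toy two-component spectrum; the Gaussian line shapes and all numbers are illustrative assumptions, not the EPR model above:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-5.0, 5.0, 200)
rng = np.random.default_rng(4)
y = (1.0 * np.exp(-(x + 1.5) ** 2)
     + 0.5 * np.exp(-(x - 2.0) ** 2)
     + 0.01 * rng.normal(size=x.size))

def residual(centres):
    # Separation of variables: component amplitudes (relative populations)
    # are eliminated by an exact linear solve at every LM iteration
    basis = np.column_stack([np.exp(-(x - c) ** 2) for c in centres])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return y - basis @ amps

fit = least_squares(residual, x0=[-1.0, 1.5], method="lm")
centres = np.sort(fit.x)
print(centres)
```

Only the two line centres are fitted nonlinearly; the amplitudes, and hence the relative populations, fall out of the inner linear solve, mirroring the multicomponent application described in the abstract.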

7. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

NASA Technical Reports Server (NTRS)

Chang, T. S.

1974-01-01

A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

8. Reliability and uncertainty in the estimation of pKa by least squares nonlinear regression analysis of multiwavelength spectrophotometric pH titration data.

PubMed

Meloun, Milan; Syrový, Tomás; Bordovská, Sylva; Vrána, Ales

2007-02-01

When drugs are poorly soluble, pH-spectrophotometric titration can be used instead of potentiometric determination of dissociation constants, along with nonlinear regression of the absorbance response surface data. Generally, regression models are extremely useful for extracting the essential features from a multiwavelength set of data. Regression diagnostics represent procedures for examining the regression triplet (data, model, method) in order to check (a) the data quality for a proposed model; (b) the model quality for a given set of data; and (c) that all of the assumptions used for least squares hold. In the interactive, PC-assisted diagnosis of data, models, and estimation methods, the examination of data quality involves the detection of influential points, outliers, and high-leverage points, which cause many problems when fitting the absorbance response hyperplane by regression. All graphically oriented techniques are suitable for the rapid estimation of influential points. The reliability of the dissociation constants for the acid drug silybin may be proven with goodness-of-fit tests of the multiwavelength spectrophotometric pH-titration data. The uncertainty in the measurement of the pKa of a weak acid obtained by least squares nonlinear regression analysis of absorption spectra is calculated. The procedure takes into account the drift in pH measurement, the drift in spectral measurement, and all of the drifts in analytical operations, as well as the relative importance of each source of uncertainty. The most important source of uncertainty in this experimental set-up is the uncertainty in the pH measurement. The influences of various sources of uncertainty on accuracy and precision are discussed using the example of the mixed dissociation constants of silybin, obtained using the SQUAD(84) and SPECFIT/32 regression programs. PMID:17216158

9. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

Khan, F.; Enzmann, F.; Kersten, M.

2015-12-01

In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.

10. COMPARISON OF AN INNOVATIVE NONLINEAR ALGORITHM TO CLASSICAL LEAST SQUARES FOR ANALYZING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTRA COLLECTED AT A CONCENTRATED SWINE PRODUCTION FACILITY: JOURNAL ARTICLE

EPA Science Inventory

NRMRL-RTP-P- 568 Childers, J.W., Phillips, W.J., Thompson*, E.L., Harris*, D.B., Kirchgessner*, D.A., Natschke, D.F., and Clayton, M.J. Comparison of an Innovative Nonlinear Algorithm to Classical Least Squares for Analyzing Open-Path Fourier Transform Infrared Spectra Collecte...

11. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

NASA Technical Reports Server (NTRS)

Gross, Bernard

1996-01-01

Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
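A hedged sketch of a least squares best fit for the three-parameter Weibull model: median-rank plotting positions linearize the cumulative distribution, slope and intercept are fitted by least squares, and the threshold (location) parameter is found by a simple scan. The plotting-position formula, synthetic strengths, and scan are illustrative assumptions, not the report's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)
m_true, eta_true, gamma_true = 3.0, 100.0, 50.0      # shape, scale, threshold
strengths = np.sort(gamma_true + eta_true * rng.weibull(m_true, size=60))

n = strengths.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)          # median-rank plotting positions
yy = np.log(-np.log(1.0 - F))                        # linearized Weibull CDF

def linfit_sse(gamma):
    xx = np.log(strengths - gamma)
    A = np.column_stack([xx, np.ones(n)])
    coef, res, *_ = np.linalg.lstsq(A, yy, rcond=None)
    return res[0], coef

gammas = np.linspace(0.0, strengths[0] - 1e-6, 200)  # threshold must lie below data
g_best = min(gammas, key=lambda g: linfit_sse(g)[0])
_, (m_fit, c_fit) = linfit_sse(g_best)
eta_fit = np.exp(-c_fit / m_fit)                     # intercept = -m * ln(eta)
median_pred = g_best + eta_fit * np.log(2.0) ** (1.0 / m_fit)
print(g_best, m_fit, eta_fit)
```

With only tens of naturally flawed specimens the threshold and shape trade off strongly, which is why the fitted distribution (e.g. its median) is better constrained than the individual parameters.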

12. Bayesian least squares deconvolution

Asensio Ramos, A.; Petit, P.

2015-11-01

Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

13. Smooth Particle Hydrodynamics with nonlinear Moving-Least-Squares WENO reconstruction to model anisotropic dispersion in porous media

Avesani, Diego; Herrera, Paulo; Chiogna, Gabriele; Bellin, Alberto; Dumbser, Michael

2015-06-01

Most numerical schemes applied to solve the advection-diffusion equation are affected by numerical diffusion. Moreover, unphysical results, such as oscillations and negative concentrations, may emerge when an anisotropic dispersion tensor is used, which induces even more severe errors in the solution of multispecies reactive transport. To cope with this long-standing problem we propose a modified version of the standard Smoothed Particle Hydrodynamics (SPH) method based on a Moving-Least-Squares Weighted-Essentially-Non-Oscillatory (MLS-WENO) reconstruction of concentrations. This scheme formulation (called MWSPH) approximates the diffusive fluxes with a Rusanov-type Riemann solver based on a high-order WENO scheme. We compare the standard SPH with MWSPH for a few test cases, considering both homogeneous and heterogeneous flow fields and different anisotropy ratios of the dispersion tensor. We show that MWSPH is stable and accurate and that it reduces the occurrence of negative concentrations compared to standard SPH. When negative concentrations are observed, their absolute values are several orders of magnitude smaller than with standard SPH. In addition, MWSPH limits spurious oscillations in the numerical solution more effectively than classical SPH. Convergence analysis shows that MWSPH is computationally more demanding than SPH, but with the payoff of a more accurate solution, which in addition is less sensitive to particle positions. The latter property simplifies the time-consuming and often user-dependent procedure of defining the initial placement of the particles.
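The moving-least-squares building block of such a reconstruction can be sketched in one dimension: a locally weighted polynomial fit evaluated at the query point. The Gaussian weight and linear basis are illustrative choices; the paper's MLS-WENO scheme is considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(6)
xp = np.sort(rng.uniform(0.0, 1.0, 50))              # particle positions
cp = np.sin(2 * np.pi * xp)                          # particle concentrations

def mls_eval(x0, h=0.1):
    w = np.exp(-((xp - x0) / h) ** 2)                # Gaussian weight kernel
    A = np.column_stack([np.ones_like(xp), xp - x0]) # linear basis about x0
    AW = A * w[:, None]
    coef = np.linalg.solve(A.T @ AW, AW.T @ cp)      # weighted normal equations
    return coef[0]                                   # local fit evaluated at x0

xq = np.linspace(0.1, 0.9, 9)
mls_err = np.max(np.abs(np.array([mls_eval(x) for x in xq])
                        - np.sin(2 * np.pi * xq)))
print(mls_err)
```

In the WENO extension, several such local fits on different stencils are blended with smoothness-dependent weights to suppress oscillations near sharp concentration fronts.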

14. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

SciTech Connect

Sader, John E.; Yousefi, Morteza; Friend, James R.

2014-02-15

Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.
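A sketch of the underlying fitting problem: a Lorentzian (simple harmonic oscillator) response plus a white-noise floor, fitted by least squares, with parameter uncertainties read from the covariance matrix. The PSD form and all numbers are illustrative, not the paper's formulas:

```python
import numpy as np
from scipy.optimize import curve_fit

def psd(f, A, f0, Q, white):
    # Damped-SHO thermal noise spectrum plus white floor (illustrative form)
    return A * f0 ** 4 / ((f ** 2 - f0 ** 2) ** 2 + (f * f0 / Q) ** 2) + white

f = np.linspace(1e3, 1e5, 2000)
rng = np.random.default_rng(7)
data = psd(f, 1e-6, 3.3e4, 50.0, 1e-8) * (1 + 0.05 * rng.normal(size=f.size))

popt, pcov = curve_fit(psd, f, data, p0=[1e-6, 3.2e4, 30.0, 1e-8])
perr = np.sqrt(np.diag(pcov))                        # parameter uncertainties
print(popt[1], popt[2], perr[1])
```

The covariance-based errors here reflect the injected noise; the paper's contribution is closed-form expressions for these uncertainties arising from the sampling noise intrinsic to thermal spectra.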

15. Using Least Squares for Error Propagation

ERIC Educational Resources Information Center

Tellinghuisen, Joel

2015-01-01

The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
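The trick described above can be sketched directly: making the target quantity itself a fit parameter yields its standard error straight from the covariance matrix, and the result agrees with manual propagation. Synthetic data and a linear model are assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
xstar = 3.5                                          # point at which to predict y

# Ordinary parametrisation y = a + b x: propagate var(y*) by hand
p1, c1 = curve_fit(lambda x, a, b: a + b * x, x, y)
var_manual = c1[0, 0] + xstar ** 2 * c1[1, 1] + 2 * xstar * c1[0, 1]

# Reparametrised y = y* + b (x - x*): SE(y*) is read off the covariance matrix
p2, c2 = curve_fit(lambda x, ystar, b: ystar + b * (x - xstar), x, y)
print(np.sqrt(var_manual), np.sqrt(c2[0, 0]))
```

Both routes give the same standard error; the reparametrisation simply lets the least-squares machinery do the propagation.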

16. [Spectral quantitative analysis by nonlinear partial least squares based on neural network internal model for flue gas of thermal power plant].

PubMed

Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia

2014-11-01

To deal with the nonlinear characteristics of spectral data for thermal power plant flue gas, a nonlinear partial least squares (PLS) analysis method with an internal model based on a neural network is adopted in this paper. The latent variables of the independent variables and the dependent variables are first extracted by PLS regression, and then used as the inputs and outputs, respectively, of a neural network trained to build the nonlinear internal model. For spectral data of flue gases of the thermal power plant, PLS, nonlinear PLS with a back-propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS), and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root-mean-square errors of prediction (RMSEP) for sulfur dioxide of BP-NPLS, RBF-NPLS, and ANFIS-NPLS are reduced by 16.96%, 16.60%, and 19.55% relative to that of PLS, respectively. The RMSEP for nitric oxide are reduced by 8.60%, 8.47%, and 10.09%, respectively. The RMSEP for nitrogen dioxide are reduced by 2.11%, 3.91%, and 3.97%, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using a neural network, which can closely approximate nonlinear characteristics, the nonlinear partial least squares methods with internal models discussed in this paper have good predictive capability and robustness, and to a certain extent overcome the limitations of nonlinear partial least squares methods with other internal models, such as polynomial and spline functions. ANFIS-NPLS has the best performance, since the internal model of an adaptive fuzzy inference system is able to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an

17. Interpolating moving least-squares methods for fitting potential energy surfaces : computing high-density potential energy surface data from low-density ab initio data points.

SciTech Connect

Dawes, R.; Thompson, D. L.; Guo, Y.; Wagner, A. F.; Minkoff, M.; Chemistry; Univ. of Missouri-Columbia; Oklahoma State Univ.

2007-05-11

A highly accurate and efficient method for molecular global potential energy surface (PES) construction and fitting is demonstrated. An interpolating-moving-least-squares (IMLS)-based method is developed using low-density ab initio Hessian values to compute high-density PES parameters suitable for accurate and efficient PES representation. The method is automated and flexible so that a PES can be optimally generated for classical trajectories, spectroscopy, or other applications. Two important bottlenecks for fitting PESs are addressed. First, high accuracy is obtained using a minimal density of ab initio points, thus overcoming the bottleneck of ab initio point generation faced in applications of modified-Shepard-based methods. Second, high efficiency is also possible (suitable when a huge number of potential energy and gradient evaluations are required during a trajectory calculation). This overcomes the bottleneck in high-order IMLS-based methods, i.e., the high cost/accuracy ratio for potential energy evaluations. The result is a set of hybrid IMLS methods in which high-order IMLS is used with low-density ab initio Hessian data to compute a dense grid of points at which the energy, Hessian, or even high-order IMLS fitting parameters are stored. A series of hybrid methods is then possible as these data can be used for neural network fitting, modified-Shepard interpolation, or approximate IMLS. Results that are indicative of the accuracy, efficiency, and scalability are presented for one-dimensional model potentials as well as for three-dimensional (HCN) and six-dimensional (HOOH) molecular PESs.
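The core IMLS idea, weights that diverge at the data points so the local weighted fit passes (numerically) through them, can be sketched in one dimension. The weight form, basis degree, and test potential are illustrative assumptions:

```python
import numpy as np

xi = np.linspace(-1.0, 1.0, 9)                       # low-density "ab initio" points
vi = np.cosh(2 * xi)                                 # stand-in potential values

def imls(x0, degree=3, eps=1e-10):
    # Inverse-square weights diverge at data points -> interpolating behaviour
    sw = np.sqrt(1.0 / ((x0 - xi) ** 2 + eps))
    A = np.vander(xi - x0, degree + 1)               # local polynomial basis
    coef, *_ = np.linalg.lstsq(A * sw[:, None], vi * sw, rcond=None)
    return coef[-1]                                  # constant term = fit at x0

xq = np.linspace(-0.95, 0.95, 50)                    # "high-density" evaluation grid
imls_err = np.max(np.abs(np.array([imls(x) for x in xq]) - np.cosh(2 * xq)))
print(imls_err)
```

Evaluating such a fit on a dense grid and storing the results is, in miniature, the low-density-to-high-density step the abstract describes.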

18. Least Squares Fitting of Perturbed Vibrational Polyads Near the Isomerization Barrier in the S_1 State of C_2H_2

Merer, A. J.; Baraban, J. H.; Changala, P. B.; Field, R. W.

2013-06-01

The S_1 electronic state of acetylene has recently been shown to have two potential minima, corresponding to cis- and trans-bent structures. The trans-bent isomer is the more stable, with the cis-bent isomer lying about 2670 cm^{-1} higher; the barrier to isomerization lies roughly 5000 cm^{-1} above the trans zero-point level. The "isomerization coordinate" (along which the molecule moves to get from the trans minimum to the barrier) is a combination of the ν_3 (trans bending) and ν_6 (cis bending) vibrational normal coordinates, but the spectrum is very confused because the ν_6 vibration interacts strongly with the ν_4 (torsion) vibration through Coriolis and Darling-Dennison resonances. Since the ν_4 and ν_6 fundamental frequencies are almost equal, the bending vibrational structure consists of polyads. At low vibrational energies the polyads where these three vibrations are excited can be fitted by least squares almost to experimental accuracy with a simple model of Coriolis and Darling-Dennison interactions, but at higher energies the huge x_{36} cross-anharmonicity, which is a symptom that the levels are approaching the isomerization barrier, progressively destroys the polyad structure; in addition the levels show an increasing even-odd staggering of their K-rotational structures, as predicted by group theory. It is not possible to fit the levels near the barrier with a simple model, though some success has been achieved with extended models. Progress with the fitting of the polyads near the barrier will be reviewed. A. L. Utz, J. D. Tobiason, E. Carrasquillo M., L. J. Sanders and F. F. Crim, J. Chem. Phys. 98, 2742 (1993).

19. Speciation of Energy Critical Elements in Marine Ferromanganese Crusts and Nodules by Principal Component Analysis and Least-squares fits to XAFS Spectra

Foster, A. L.; Klofas, J. M.; Hein, J. R.; Koschinsky, A.; Bargar, J.; Dunham, R. E.; Conrad, T. A.

2011-12-01

Marine ferromanganese crusts and nodules ("Fe-Mn crusts") are considered a potential mineral resource due to their accumulation of several economically-important elements at concentrations above mean crustal abundances. They are typically composed of intergrown Fe oxyhydroxide and Mn oxide; thicker (older) crusts can also contain carbonate fluorapatite. We used X-ray absorption fine-structure (XAFS) spectroscopy, a molecular-scale structure probe, to determine the speciation of several elements (Te, Bi, Mo, Zr, Pt) in Fe-Mn crusts. As a first step in analysis of this dataset, we have conducted principal component analysis (PCA) of Te K-edge and Mo K-edge, k3-weighted XAFS spectra. The sample set consisted of 12 homogenized, ground Fe-Mn crust samples from 8 locations in the global ocean. One sample was subjected to a chemical leach to selectively remove Mn oxides and the elements associated with it. The samples in the study set contain 50-205 mg/kg Te (average = 88) and 97-802 mg/kg Mo (average = 567). PCAs of background-subtracted, normalized Te K-edge and Mo K-edge XAFS spectra were performed on a data matrix of 12 rows x 122 columns (rows = samples; columns = Te or Mo fluorescence value at each energy step) and results were visualized without rotation. The number of significant components was assessed by the Malinowski indicator function and ability of the components to reconstruct the features (minus noise) of all sample spectra. Two components were significant by these criteria for both Te and Mo PCAs and described a total of 74 and 75% of the total variance, respectively. Reconstruction of potential model compounds by the principal components derived from PCAs on the sample set ("target transformation") provides a means of ranking models in terms of their utility for subsequent linear-combination, least-squares (LCLS) fits (the next step of data analysis). Synthetic end-member models of Te4+, Te6+, and Mo adsorbed to Fe(III) oxyhydroxide and Mn oxide were

20. On the Least-Squares Fitting of Slater-Type Orbitals with Gaussians: Reproduction of the STO-NG Fits Using Microsoft Excel and Maple

ERIC Educational Resources Information Center

Pye, Cory C.; Mercer, Colin J.

2012-01-01

The symbolic algebra program Maple and the spreadsheet Microsoft Excel were used in an attempt to reproduce the Gaussian fits to a Slater-type orbital, required to construct the popular STO-NG basis sets. The successes and pitfalls encountered in such an approach are chronicled. (Contains 1 table and 3 figures.)
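The core computation behind the STO-NG fits can be sketched in a few lines. This is a simplification and assumption-laden stand-in, not the published procedure: the real STO-NG exponents maximize an overlap integral, whereas here a 1s Slater function exp(-r) is fit by three Gaussians via least squares on a radial grid, with starting exponents chosen near the known STO-3G values.

```python
import numpy as np
from scipy.optimize import curve_fit

def sto_ng(r, *params):
    """Sum of N Gaussians in r^2; params = (c1, a1, c2, a2, ...)."""
    c = np.asarray(params[0::2])
    a = np.asarray(params[1::2])
    return np.sum(c[:, None] * np.exp(-a[:, None] * r[None, :] ** 2), axis=0)

r = np.linspace(0.01, 6.0, 300)
sto = np.exp(-r)  # unnormalized 1s Slater-type orbital, zeta = 1

# Three-Gaussian fit, starting near the literature STO-3G exponents.
p0 = [0.2, 0.1, 0.4, 0.4, 0.3, 2.2]
popt, _ = curve_fit(sto_ng, r, sto, p0=p0, maxfev=20000)
rms = np.sqrt(np.mean((sto_ng(r, *popt) - sto) ** 2))
print(f"{rms:.4f}")
```

The residual never vanishes: a finite Gaussian sum cannot reproduce the cusp of the Slater function at r = 0, which is one of the pitfalls such an exercise makes visible.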

1. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations]

NASA Technical Reports Server (NTRS)

Periaux, J.

1979-01-01

The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed-point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev space.

2. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N{sub 4}

SciTech Connect

Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V. E-mail: candler@aem.umn.edu; Truhlar, Donald G. E-mail: candler@aem.umn.edu

2014-02-07

Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N{sub 4}. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
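The one-dimensional moving least squares idea the authors review can be sketched directly: at each evaluation point, fit a local polynomial by weighted least squares with weights that decay away from that point, then read off the local fit's value. This is a minimal 1D illustration, not the paper's six-dimensional, permutationally invariant method; the Gaussian weight, bandwidth, and test function are arbitrary choices.

```python
import numpy as np

def mls_eval(x_eval, x_data, y_data, degree=2, h=0.3):
    """Moving least squares in 1D: at each evaluation point, fit a local
    polynomial by weighted least squares with Gaussian weights centered on
    that point, then return the local fit's value there."""
    y_out = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        w = np.sqrt(np.exp(-((x_data - x0) / h) ** 2))   # locality weights
        V = np.vander(x_data - x0, degree + 1)           # local basis in (x - x0)
        coef, *_ = np.linalg.lstsq(w[:, None] * V, w * y_data, rcond=None)
        y_out[i] = coef[-1]   # constant term = fitted value at x0
    return y_out

# Toy data standing in for potential energy values at geometries.
x_data = np.linspace(0.0, 2.0 * np.pi, 40)
y_data = np.sin(x_data)
x_eval = np.linspace(1.0, 5.0, 9)
y_fit = mls_eval(x_eval, x_data, y_data)
print(float(np.max(np.abs(y_fit - np.sin(x_eval)))))
```

The paper's "statistical localization" amounts to truncating such weights at a cutoff radius chosen from the local data-point density, so that far-away points drop out of each local solve entirely.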

3. The thermodynamic dissociation constants of four non-steroidal anti-inflammatory drugs by the least-squares nonlinear regression of multiwavelength spectrophotometric pH-titration data.

PubMed

Meloun, Milan; Bordovská, Sylva; Galla, Lubomír

2007-11-30

The mixed dissociation constants of four non-steroidal anti-inflammatory drugs (NSAIDs), ibuprofen, diclofenac sodium, flurbiprofen and ketoprofen, at ionic strengths I in the range 0.003-0.155 and at temperatures of 25 degrees C and 37 degrees C, were determined using two different multiwavelength and multivariate treatments of spectral data, SPECFIT/32 and SQUAD(84) nonlinear regression analyses, together with INDICES factor analysis. The factor analysis in the INDICES program predicts the correct number of components, and even the presence of minor ones, when the data quality is high and the instrumental error is known. The thermodynamic dissociation constant pK(a)(T) was estimated by nonlinear regression of (pK(a), I) data at 25 degrees C and 37 degrees C. Goodness-of-fit tests based on various regression diagnostics confirmed the reliability of the parameter estimates. PALLAS, MARVIN, SPARC, ACD/pK(a) and Pharma Algorithms predict pK(a) from the structural formulae of the drug compounds, in agreement with the experimental values. The best agreement is between the experimentally found values and the ACD/pK(a) program, followed by SPARC. PALLAS and MARVIN predicted pK(a,pred) values with larger bias errors relative to the experimental values for all four drugs. PMID:17825517
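The nonlinear-regression core of a spectrophotometric pH-titration is compact enough to sketch. Below, a single-wavelength absorbance curve for a monoprotic acid is fit for its pKa; the model form is standard, but the wavelength behavior, noise level, and pKa value (loosely ibuprofen-like) are invented for illustration, and this single-wavelength fit is far simpler than the multiwavelength SPECFIT/SQUAD treatments in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbance(ph, a_acid, a_base, pka):
    """Single-wavelength absorbance of a monoprotic acid vs pH: a
    mole-fraction-weighted mix of the acid-form and base-form absorbances."""
    frac_base = 1.0 / (1.0 + 10.0 ** (pka - ph))
    return a_acid * (1.0 - frac_base) + a_base * frac_base

# Synthetic titration data with a known pKa of 4.4.
rng = np.random.default_rng(1)
ph = np.linspace(2.5, 7.0, 25)
data = absorbance(ph, 0.90, 0.30, 4.4) + 0.005 * rng.standard_normal(ph.size)

popt, pcov = curve_fit(absorbance, ph, data, p0=[1.0, 0.0, 5.0])
print(np.round(popt, 2))  # fitted [a_acid, a_base, pKa]
```

The diagonal of `pcov` gives the variance estimates on which goodness-of-fit diagnostics like those in the abstract are built.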

4. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

Technology Transfer Automated Retrieval System (TEKTRAN)

One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
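A weighted Langmuir fit of the kind this record discusses can be sketched as follows. The sorption data, the 5% relative-error model, and the parameter values are all synthetic assumptions for illustration; the point is that supplying per-point uncertainties makes the fit minimize the variance-weighted sum of squares rather than the plain one.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, s_max, k):
    """Langmuir isotherm: sorbed amount vs equilibrium concentration."""
    return s_max * k * c / (1.0 + k * c)

# Synthetic sorption data with heteroscedastic noise (error grows with the
# measured value) -- the situation that motivates weighting in the first place.
rng = np.random.default_rng(2)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
true = langmuir(conc, 250.0, 0.15)
sigma = 0.05 * true                      # assumed 5 % relative error
obs = true + sigma * rng.standard_normal(conc.size)

# Weighted fit: curve_fit minimizes sum(((obs - model) / sigma)**2).
popt, _ = curve_fit(langmuir, conc, obs, p0=[200.0, 0.2],
                    sigma=sigma, absolute_sigma=True)
print(np.round(popt, 3))  # fitted [s_max, K]
```

An ordinary (unweighted) fit of the same data would let the large, noisy high-concentration points dominate the estimate of `s_max`.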

5. Tensor hypercontraction. II. Least-squares renormalization.

PubMed

Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

2012-12-14

The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1∕r(12) operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N(5)) effort if exact integrals are decomposed, or O(N(4)) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N(4)) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986

6. Tensor hypercontraction. II. Least-squares renormalization

Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

2012-12-01

The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

7. Götterdämmerung over total least squares

Malissiovas, G.; Neitzel, F.; Petrovic, S.

2016-06-01

The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have also been developed in the past by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Four non-linear adjustment problems are therefore investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D, and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology, by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical to those resulting from the LS approach. As a by-product of this research, two novel approaches are presented for the TLS solutions of fitting a straight line in 3D and of the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated with examples from the literature and are also numerically compared to published iterative solutions.
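The first of the four problems, the TLS fit of a straight line to 2D points, has exactly the direct SVD solution the abstract alludes to. The sketch below uses invented data; the line direction is the first right singular vector of the centered point cloud, and the smallest singular value measures the orthogonal misfit.

```python
import numpy as np

# Total least squares line fit in 2D: the fitted line passes through the
# centroid, its direction is the right singular vector of the centered data
# with the LARGEST singular value, and its normal is the one with the
# smallest (orthogonal-distance minimization).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 50)
pts = np.column_stack([t, 2.0 * t + 1.0])
pts += 0.05 * rng.standard_normal(pts.shape)   # isotropic noise in x and y

centroid = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - centroid)
direction = Vt[0]   # line direction
normal = Vt[1]      # line normal

slope = direction[1] / direction[0]
print(round(slope, 2))
```

Unlike ordinary least squares, which minimizes vertical residuals only, this solution treats errors in both coordinates symmetrically, which is the natural model for geodetic point coordinates.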

8. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

NASA Technical Reports Server (NTRS)

Everhart, J. L.

1994-01-01

A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
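The scheme the abstract describes — expand chi-square to quadratic order, derive a set of simultaneous linear equations, solve by matrix algebra, iterate from meaningful starting values — is the classic Gauss-Newton iteration. The sketch below is a generic reconstruction of that idea on an invented exponential model, not NLINEAR's actual Fortran code, and it omits the statistical weights for brevity.

```python
import numpy as np

def gauss_newton(f, jac, p0, x, y, iters=20):
    """Minimize chi-square = sum((y - f(x, p))**2) by repeatedly solving the
    linearized normal equations (J^T J) dp = J^T r -- the simultaneous linear
    equations obtained from the quadratic expansion of chi-square."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(x, p)
        J = jac(x, p)
        dp = np.linalg.solve(J.T @ J, J.T @ r)
        p = p + dp
    return p

# Example model: y = a * exp(b * x), with its analytic Jacobian.
model = lambda x, p: p[0] * np.exp(p[1] * x)
jacobian = lambda x, p: np.column_stack(
    [np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])

rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0, 30)
y = model(x, [2.0, -1.3]) + 0.01 * rng.standard_normal(x.size)

# Convergence requires meaningful initial estimates, as the abstract notes.
p_fit = gauss_newton(model, jacobian, [1.5, -1.0], x, y)
print(np.round(p_fit, 2))
```

Starting far from the solution can make the undamped step diverge, which is why production codes add damping (Levenberg-Marquardt) on top of this iteration.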

9. Augmented classical least squares multivariate spectral analysis

DOEpatents

Haaland, David M.; Melgaard, David K.

2004-02-03

A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
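The classical least squares (CLS) model that ACLS builds on is simple to state: mixture spectra are modeled as A = C K, where C holds component concentrations and K the pure-component spectra. The sketch below shows plain CLS calibration and prediction on synthetic two-component data (the Gaussian "pure spectra" and concentrations are invented); the patented ACLS augmentation of K with residual-derived shapes is not reproduced here.

```python
import numpy as np

# Synthetic calibration set: 15 mixtures of two pure components.
rng = np.random.default_rng(5)
wav = np.linspace(0.0, 1.0, 60)
K_true = np.vstack([np.exp(-((wav - 0.3) / 0.08) ** 2),
                    np.exp(-((wav - 0.7) / 0.10) ** 2)])   # pure spectra
C_cal = rng.uniform(0.1, 1.0, size=(15, 2))                # known concentrations
A_cal = C_cal @ K_true + 1e-4 * rng.standard_normal((15, 60))

# Calibration: least-squares estimate of the pure-component spectra K.
K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]

# Prediction: least-squares concentrations for a new mixture spectrum.
c_new = np.array([0.4, 0.8])
a_new = c_new @ K_true
c_pred = np.linalg.lstsq(K_hat.T, a_new, rcond=None)[0]
print(np.round(c_pred, 2))
```

CLS breaks down exactly where the abstract says: if the new spectrum contains a component absent from `K_hat`, its signal is forced onto the modeled components. ACLS addresses this by augmenting the calibration with spectral shapes estimated from the residuals.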

10. Augmented Classical Least Squares Multivariate Spectral Analysis

DOEpatents

Haaland, David M.; Melgaard, David K.

2005-01-11

A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

11. Augmented Classical Least Squares Multivariate Spectral Analysis

DOEpatents

Haaland, David M.; Melgaard, David K.

2005-07-26

A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

12. Collinearity in Least-Squares Analysis

ERIC Educational Resources Information Center

de Levie, Robert

2012-01-01

How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

13. Weighted conditional least-squares estimation

SciTech Connect

Booth, J.G.

1987-01-01

A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.

14. Spacecraft inertia estimation via constrained least squares

NASA Technical Reports Server (NTRS)

Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

2006-01-01

This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs [linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.

15. A spectral mimetic least-squares method

DOE PAGES Beta

Bochev, Pavel; Gerritsma, Marc

2014-09-01

We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

16. A spectral mimetic least-squares method

SciTech Connect

Bochev, Pavel; Gerritsma, Marc

2014-09-01

We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

17. The least square optimization in image mosaic

Zhang, Yu-dong; Yang, Yong-yue

2015-02-01

Image registration has been a hot research topic in computer vision and image processing, and it is one of the key technologies in image mosaicking. In order to improve the accuracy of matching feature points, this paper puts forward a least-squares optimization for image mosaicking based on an algorithm for matching the similarity of matrices. The correlation coefficient method for matrices is used to match the module points in the overlap region of the images and to calculate the error between matrices. The error of the feature points is then further minimized by least-squares optimization. Finally, the mosaic is assembled from the pairs of feature points with the minimum residual sum of squares. The experimental results demonstrate that least-squares optimization can mosaic images with an overlap region and improve the accuracy of matching feature points.
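The final least-squares step of such a pipeline reduces, in the simplest pure-translation case, to a closed form: the offset minimizing the residual sum of squares over matched feature points is the mean of the point-wise differences. The matched points and noise below are invented; real mosaicking would typically estimate an affine or homography model the same way.

```python
import numpy as np

# Matched feature points in two overlapping images, differing by a shift
# plus localization noise (all values synthetic).
rng = np.random.default_rng(6)
pts_a = rng.uniform(0.0, 500.0, size=(12, 2))              # features in image A
true_shift = np.array([37.5, -12.0])
pts_b = pts_a + true_shift + 0.5 * rng.standard_normal((12, 2))

shift = (pts_b - pts_a).mean(axis=0)             # least-squares translation
residual = np.sum((pts_b - pts_a - shift) ** 2)  # minimized residual sum
print(np.round(shift, 1))
```

Averaging over many matches is what suppresses the per-point matching error; a single noisy match would carry its full localization error into the mosaic seam.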

18. Computing circles and spheres of arithmetic least squares

Nievergelt, Yves

1994-07-01

A proof of the existence and uniqueness of L. Moura and R. Kitney's circle of least squares leads to estimates of the accuracy with which a computer can determine that circle. The result shows that the accuracy deteriorates as the correlation between the coordinates of the data points increases in magnitude. Yet a numerically more stable computation of eigenvectors yields the limiting straight line, which a further analysis reveals to be the line of total least squares. The same analysis also provides generalizations to fitting spheres in higher dimensions.
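An algebraic least-squares circle fit can be written as a purely linear problem, which is the standard trick behind results like the one this record analyzes (the sketch below is the Kasa-style fit, offered as a generic illustration rather than Moura and Kitney's exact formulation): writing the circle as x^2 + y^2 = 2ax + 2by + c makes (a, b, c) the solution of an ordinary linear least-squares system, with center (a, b) and radius sqrt(c + a^2 + b^2). The data are synthetic.

```python
import numpy as np

# Noisy points on a circle of center (3, -1) and radius 2 (synthetic).
rng = np.random.default_rng(7)
theta = rng.uniform(0.0, 2.0 * np.pi, 30)
x = 3.0 + 2.0 * np.cos(theta) + 0.02 * rng.standard_normal(30)
y = -1.0 + 2.0 * np.sin(theta) + 0.02 * rng.standard_normal(30)

# Linear system: [2x, 2y, 1] @ [a, b, c] = x^2 + y^2.
A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
radius = np.sqrt(c + a**2 + b**2)
print(round(a, 2), round(b, 2), round(radius, 2))
```

The accuracy caveat in the abstract shows up here too: when the points cluster on a short, nearly straight arc, the columns of `A` become nearly collinear and the fitted circle degenerates toward a line, consistent with the limiting total-least-squares line described above.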

19. Kriging and its relation to least squares

SciTech Connect

Oden, N.

1984-11-01

Kriging is a technique for producing contour maps that, under certain conditions, are optimal in a mean squared error sense. The relation of Kriging to Least Squares is reviewed here. New methods for analyzing residuals are suggested, ML estimators are inspected, and an expression is derived for calculating the cross-validation error. An example using ground water data is provided.

20. Factor Analysis by Generalized Least Squares.

ERIC Educational Resources Information Center

Joreskog, Karl G.; Goldberger, Arthur S.

Aitkin's generalized least squares (GLS) principle, with the inverse of the observed variance-covariance matrix as a weight matrix, is applied to estimate the factor analysis model in the exploratory (unrestricted) case. It is shown that the GLS estimates are scale free and asymptotically efficient. The estimates are computed by a rapidly…

1. Least squares estimation of avian molt rates

USGS Publications Warehouse

Johnson, D.H.

1989-01-01

A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
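The idea is simple enough to sketch: if a bird's molt score grows roughly linearly with date, an ordinary least-squares line through pooled (date, score) observations gives the molt rate as its slope, and the date where the fitted line crosses zero estimates molt onset. The numbers below are made up for illustration and are not the paper's dove data.

```python
import numpy as np

# Hypothetical capture records: day of season and molt score at each capture.
days = np.array([10.0, 14.0, 20.0, 25.0, 31.0, 38.0, 45.0])
score = np.array([1.0, 3.2, 6.1, 8.4, 11.0, 14.6, 18.1])

slope, intercept = np.polyfit(days, score, 1)  # molt rate = slope
onset = -intercept / slope                     # date where fitted score = 0
print(round(slope, 2), round(onset, 1))
```

With birds captured more than once, the same regression can be run per bird and the slopes averaged, which is closer in spirit to the repeated-capture design the abstract describes.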

2. Partial least squares for dependent data

PubMed Central

Singer, Marco; Krivobokova, Tatyana; Munk, Axel; de Groot, Bert

2016-01-01

We consider the partial least squares algorithm for dependent data and study the consequences of ignoring the dependence both theoretically and numerically. Ignoring nonstationary dependence structures can lead to inconsistent estimation, but a simple modification yields consistent estimation. A protein dynamics example illustrates the superior predictive power of the proposed method. PMID:27279662

3. Iterative methods for weighted least-squares

SciTech Connect

Bobrovnikova, E.Y.; Vavasis, S.A.

1996-12-31

A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

4. Parallel block schemes for large scale least squares computations

SciTech Connect

Golub, G.H.; Plemmons, R.J.; Sameh, A.

1986-04-01

Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

5. Hybrid least squares multivariate spectral analysis methods

DOEpatents

Haaland, David M.

2002-01-01

A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

6. Least squares restoration of multichannel images

NASA Technical Reports Server (NTRS)

Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

1991-01-01

Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.

7. Least-squares wave-equation migration/inversion

Kuehl, Henning

This thesis presents an acoustic migration/inversion algorithm that inverts seismic reflection data for the angle dependent subsurface reflectivity by means of least-squares minimization. The method is based on the primary seismic data representation (single scattering approximation) and utilizes one-way wavefield propagators ('wave-equation operators') to compute the Green's functions of the problem. The Green's functions link the measured reflection seismic data to the image points in the earth's interior, where an angle dependent imaging condition probes the image point's angular spectrum in depth. The proposed least-squares wave-equation migration minimizes a weighted seismic data misfit function complemented with a model space regularization term. The regularization penalizes discontinuities and rapid amplitude changes in the reflection angle dependent common image gathers---the model space of the inverse problem. 'Roughness' with respect to angle dependence is attributed to seismic data errors (e.g., incomplete and irregular wavefield sampling) which adversely affect the amplitude fidelity of the common image gathers. The least-squares algorithm fits the seismic data taking their variance into account, and, at the same time, imposes some degree of smoothness on the solution. The model space regularization increases amplitude robustness considerably. It mitigates kinematic imaging artifacts and noise while preserving the data consistent smooth angle dependence of the seismic amplitudes. In least-squares migration the seismic modelling operator and the migration operator---the adjoint of modelling---are applied iteratively to minimize the regularized objective function. Whilst least-squares migration/inversion is computationally expensive, synthetic data tests show that usually a few iterations suffice for its benefits to take effect. An example from the Gulf of Mexico illustrates the application of least-squares wave-equation migration/inversion to a real

8. Bootstrapping least-squares estimates in biochemical reaction networks.

PubMed

Linder, Daniel F; Rempała, Grzegorz A

2015-01-01

The paper proposes new computational methods for computing confidence bounds for the least-squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
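
As a rough illustration of the kind of fitting these LSEs involve (a generic sketch, not the paper's method or its bootstrap; all names and data are invented), the code below estimates a single rate constant k of the decay ODE dx/dt = -kx by minimizing the squared misfit between an Euler-integrated trajectory and observed data:

```python
# Sketch: least-squares estimation of a rate constant k in dx/dt = -k*x,
# in the spirit of fitting a deterministic ODE to trajectory data.
# Illustrative only; not the bootstrap procedure described in the record.

def euler_traj(k, x0, times):
    """Integrate dx/dt = -k*x with small Euler steps, sampled at `times`."""
    xs, x, t, dt = [], x0, 0.0, 1e-3
    for target in times:
        while t < target:
            x += -k * x * dt
            t += dt
        xs.append(x)
    return xs

def sse(k, data, x0, times):
    """Sum of squared residuals between model trajectory and observations."""
    return sum((m - d) ** 2 for m, d in zip(euler_traj(k, x0, times), data))

def fit_rate(data, x0, times, lo=0.01, hi=5.0, iters=60):
    """Golden-section search for the k minimizing the squared error."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if sse(c, data, x0, times) < sse(d, data, x0, times):
            b = d
        else:
            a = c
    return (a + b) / 2
```

For noise-free data generated from x(t) = x0 exp(-kt), the recovered k differs from the true value only by the Euler discretization bias.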

9. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

PubMed Central

Linder, Daniel F.

2015-01-01

The paper proposes new computational methods for computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network’s partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated, since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

10. Source allocation by least-squares hydrocarbon fingerprint matching

SciTech Connect

William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker

2006-11-01

There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment using the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

11. Source allocation by least-squares hydrocarbon fingerprint matching.

PubMed

Burns, William A; Mudge, Stephen M; Bence, A Edward; Boehm, Paul D; Brown, John S; Page, David S; Parker, Keith R

2006-11-01

There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment using the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. PMID:17144278

12. Total least squares for anomalous change detection

SciTech Connect

Theiler, James P; Matsekh, Anna M

2010-01-01

A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.
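
The ordinary-vs-total least squares contrast underlying the chronochrome/TLSQ distinction can be shown on a toy line fit (a generic sketch, not the change detectors described in this record): OLS minimizes vertical residuals only, while TLS minimizes orthogonal distances, treating both variables as noisy.

```python
# Sketch: OLS vs. TLS slope for a simple line fit.  The TLS slope is the
# direction of the smallest principal axis of the centred 2x2 scatter
# matrix (closed form; assumes sxy != 0).  Data here are illustrative.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def tls_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    d = syy - sxx
    # Larger root of the characteristic quadratic gives the TLS slope.
    return (d + (d * d + 4 * sxy * sxy) ** 0.5) / (2 * sxy)
```

On exact line data the two estimates coincide; with noise in both coordinates they diverge, and the residuals of each fit are what an anomalous-change detector would threshold.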

13. Constrained least squares estimation incorporating wavefront sensing

Ford, Stephen D.; Welsh, Byron M.; Roggemann, Michael C.

1998-11-01

We address the optimal processing of astronomical images using the deconvolution from wave-front sensing (DWFS) technique. A constrained least-squares (CLS) solution which incorporates ensemble-averaged DWFS data is derived using Lagrange minimization. The new estimator requires DWFS data, noise statistics, optical transfer function statistics, and a constraint. The constraint can be chosen such that the algorithm selects a conventional regularization constant automatically; no ad hoc parameter tuning is necessary. The algorithm uses an iterative Newton-Raphson minimization to determine the optimal Lagrange multiplier. Computer simulation of a 1 m telescope imaging through atmospheric turbulence is used to test the estimation scheme. CLS object estimates are compared with the corresponding long exposure images. The CLS algorithm provides images with superior resolution and is computationally inexpensive, converging to a solution in fewer than 10 iterations.

14. Classical least squares multivariate spectral analysis

DOEpatents

Haaland, David M.

2002-01-01

An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of prediction-augmented classical least squares (PACLS) is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.

15. Hybrid least squares multivariate spectral analysis methods

DOEpatents

Haaland, David M.

2004-03-23

A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

16. Vehicle detection using partial least squares.

PubMed

Kembhavi, Aniruddha; Harwood, David; Davis, Larry S

2011-06-01

Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and Pairs of Pixels, a simple yet powerful image descriptor that captures the structural characteristics of objects. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance. PMID:20921579

17. Flexible least squares for approximately linear systems

Kalaba, Robert; Tesfatsion, Leigh

1990-10-01

A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.

18. On matrix factorization and efficient least squares solution.

Schwarzenberg-Czerny, A.

1995-04-01

Least squares solution of ill-conditioned normal equations by Cholesky-Banachiewicz (ChB) factorization suffers from numerical problems related to near singularity and loss of accuracy. We demonstrate that the near singularity does not arise for correctly posed statistical problems. The accuracy loss is also immaterial, since for nonlinear least squares the solution by Newton-Raphson iterations yields machine accuracy with no regard for the accuracy of an individual iteration (Wilkinson 1963). Since this accuracy may not be achieved using singular value decomposition (SVD) without additional iterations for differential corrections, and since SVD is more demanding in terms of the number of operations and particularly in terms of required memory, we argue that ChB factorization remains the algorithm of choice for least squares. We present a new, very compact code implementation of Cholesky (1924) and Banachiewicz (1938b) factorization in an elegant form proposed by Banachiewicz (1942). A source listing of the code is provided. We point out that in the same publication Banachiewicz (1938) discovered LU factorization of square matrices before Crout (1941) and rediscovered factorization of symmetric matrices after Cholesky (1924). Since the two algorithms became confused, no due credit is given to Banachiewicz in the modern literature.
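
A minimal textbook sketch of the row-wise (Banachiewicz-ordering) Cholesky factorization and its use for solving a symmetric positive-definite normal-equations system; this is a generic illustration, not the authors' published code:

```python
# Sketch: Cholesky-Banachiewicz factorization A = L L^T (row by row)
# for a symmetric positive-definite matrix A, then forward and backward
# substitution to solve A x = b, as in normal equations of least squares.

def cholesky(A):
    """Lower-triangular factor L of SPD matrix A, computed row by row."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    """Solve A x = b via A = L L^T: forward then backward substitution."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):                       # forward: L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # backward: L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

The factorization costs roughly half of an LU decomposition and needs no pivoting for well-posed SPD systems, which is the record's argument for preferring ChB over SVD in routine least-squares work.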

19. Recursive total least squares adaptive filtering

Dowling, Eric M.; DeGroat, Ronald D.

1991-12-01

In this paper a recursive total least squares (RTLS) adaptive filter is introduced and studied. The TLS approach is more appropriate and provides more accurate results than the LS approach when there is error on both sides of the adaptive filter equation, as in linear prediction, AR modeling, and direction finding. The RTLS filter weights are updated in time O(mr), where m is the filter order and r is the dimension of the tracked subspace. In conventional adaptive filtering problems, r equals 1, so that updates can be performed with complexity O(m). The updates are performed by tracking an orthonormal basis for the smaller of the signal or noise subspaces using a computationally efficient subspace tracking algorithm. The filter is shown to outperform both LMS and RLS in terms of tracking and steady state tap weight error norms. It is also more versatile in that it can adapt its weights in the absence of persistent excitation, i.e., when the input data correlation matrix is near rank deficient. Through simulation, the convergence and tracking properties of the filter are presented and compared with LMS and RLS.

20. Simultaneous least squares fitter based on the Lagrange multiplier method

Guan, Ying-Hui; Lü, Xiao-Rui; Zheng, Yang-Heng; Zhu, Yong-Sheng

2013-10-01

We developed a least squares fitter for extracting expected physics parameters from correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity by linearization during the χ2 minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the D0-D¯0 mixing parameters as a test bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.
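
The Lagrange-multiplier idea can be reduced to a toy version (a generic sketch with unit weights, not the HEP fitter described above): adjust measured values m as little as possible, in the least-squares sense, so that one linear constraint a·x = b holds exactly. Stationarity of the Lagrangian gives the closed form x = m + a (b - a·m)/(a·a).

```python
# Sketch: least-squares adjustment of measurements m subject to a single
# linear constraint a.x = b, via a Lagrange multiplier (unit weights).
# Illustrative only; real kinematic fitters use covariance weighting.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def constrained_fit(m, a, b):
    """Minimize ||x - m||^2 subject to a.x = b."""
    lam = (b - dot(a, m)) / dot(a, a)   # multiplier fixed by the constraint
    return [mi + lam * ai for mi, ai in zip(m, a)]
```

For example, measurements [1.0, 2.0] constrained to sum to 4 are adjusted symmetrically to [1.5, 2.5], the closest point on the constraint plane.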

1. Forecasting Istanbul monthly temperature by multivariate partial least square

Ertaç, Mefharet; Firuzan, Esin; Solum, Şenol

2015-07-01

Weather forecasting, especially for temperature, has always been a popular subject, since it affects our daily life and, like statistics, always involves uncertainty. The goals of this study are (a) to forecast monthly mean temperature using meteorological variables such as temperature, humidity and rainfall; and (b) to improve forecasting ability by evaluating the forecast errors with respect to parameter changes and to local or global forecasting methods. Approximately 100 years of meteorological data from 54 automatic meteorology observation stations of Istanbul, the mega-city of Turkey, are analyzed to characterize the meteorological behaviour of the city. A new partial least square (PLS) forecasting technique based on chaotic analysis is also developed, using nonlinear time series and variable selection methods. The proposed model is also compared with artificial neural networks (ANNs), which model the relation between inputs and outputs nonlinearly, with neurons that work like the human brain. Ordinary least square (OLS), PLS and ANN methods are used for nonlinear time series forecasting in this study. The major findings are the chaotic nature of the meteorological data of Istanbul and the superior performance of the proposed PLS model.

2. On the Significance of Properly Weighting Sorption Data for Least Squares Analysis

Technology Transfer Automated Retrieval System (TEKTRAN)

One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...

3. Kernel-based least squares policy iteration for reinforcement learning.

PubMed

Xu, Xin; Hu, Dewen; Lu, Xicheng

2007-07-01

In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

4. A least-squares framework for Component Analysis.

PubMed

De la Torre, Fernando

2012-06-01

Over the last century, Component Analysis (CA) methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Locality Preserving Projections (LPP), and Spectral Clustering (SC) have been extensively used as a feature extraction step for modeling, classification, visualization, and clustering. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and nonlinear representations of data in closed form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank deficient matrices (e.g., the small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LPP, SC, and their kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: 1) it provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; 2) it yields efficient numerical schemes to solve CA techniques; 3) it overcomes the small sample size problem; 4) it provides a framework to easily extend CA methods. We derive weighted generalizations of PCA, LDA, SC, and CCA, and several new CA techniques. PMID:21911913

5. An Incremental Weighted Least Squares Approach to Surface Lights Fields

Coombe, Greg; Lastra, Anselmo

An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

6. Local validation of EU-DEM using Least Squares Collocation

Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

2016-04-01

In the present study we evaluate the European Digital Elevation Model (EU-DEM) over a limited area, covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, applying Least Squares Collocation to the residuals remaining after the fitted surface is removed. Our results, based on cross-validation points, reveal a local consistency between EU-DEM and official heights that is better than 1.4 meters.

7. Estimating parameter of influenza transmission using regularized least square

Nuraini, N.; Syukriah, Y.; Indratno, S. W.

2014-02-01

The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least square method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. New infection data for influenza from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
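
A one-parameter caricature of regularized (Tikhonov/ridge) least squares, not the paper's FEM-based estimator: for a model y ≈ βx, the penalty λβ² shrinks the usual least-squares slope, stabilizing the estimate when the data are noisy or the design is nearly singular.

```python
# Sketch: Tikhonov-regularized least squares for one parameter beta in
# y ~ beta*x.  Closed form:
#   argmin_beta  sum (y - beta*x)^2 + lam*beta^2
#             =  sum(x*y) / (sum(x^2) + lam).
# Data and lam values are purely illustrative.

def ridge_slope(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)
```

With lam = 0 this reduces to ordinary least squares; increasing lam trades bias for variance, shrinking the estimate toward zero.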

8. Material characterization via least squares support vector machines

Swaddiwudhipong, S.; Tho, K. K.; Liu, Z. S.; Hua, J.; Ooi, N. S. B.

2005-09-01

Analytical methods to interpret the load indentation curves are difficult to formulate and execute directly due to material and geometric nonlinearities as well as complex contact interactions. In the present study, a new approach based on the least squares support vector machines (LS-SVMs) is adopted in the characterization of materials obeying power law strain-hardening. The input data for training and verification of the LS-SVM model are obtained from 1000 large strain-large deformation finite element analyses which were carried out earlier to simulate indentation tests. The proposed LS-SVM model relates the characteristics of the indentation load-displacement curve directly to the elasto-plastic material properties without resorting to any iterative schemes. The tuned LS-SVM model is able to accurately predict the material properties when presented with new sets of load-indentation curves which were not used in the training and verification of the model.

9. Evaluation of fuzzy inference systems using fuzzy least squares

NASA Technical Reports Server (NTRS)

Barone, Joseph M.

1992-01-01

Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.

10. 2-D weighted least-squares phase unwrapping

DOEpatents

Ghiglia, Dennis C.; Romero, Louis A.

1995-01-01

Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals.

11. 2-D weighted least-squares phase unwrapping

DOEpatents

Ghiglia, D.C.; Romero, L.A.

1995-06-13

Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals. 6 figs.

12. Spreadsheet for designing valid least-squares calibrations: A tutorial.

PubMed

Bettencourt da Silva, Ricardo J N

2016-02-01

Instrumental methods of analysis are used to define the price of goods, the compliance of products with a regulation, or the outcome of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or using a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not so frequently used due to the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts in the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for applying the least-squares regression model, the assessment of instrumental response linearity, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful for cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented. PMID:26653439
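
The unweighted least-squares calibration at the heart of such tutorials can be sketched as follows (generic textbook formulas; the `calibrate` helper and data are illustrative, not the spreadsheet's implementation). The residual standard deviation s_{y/x} is the quantity commonly propagated into the calibration uncertainty.

```python
# Sketch: unweighted linear calibration y = b0 + b1*x by least squares,
# returning slope, intercept, and the residual standard deviation s_yx
# (valid in ranges where signal precision is constant, as the record notes).

def calibrate(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    ss_res = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    s_yx = (ss_res / (n - 2)) ** 0.5     # n-2 degrees of freedom
    return b0, b1, s_yx
```

Interpolating an unknown from its signal then inherits an uncertainty proportional to s_yx divided by the slope.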

13. Orthogonalizing EM: A design-based least squares algorithm

PubMed Central

Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

2016-01-01

We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

14. Least square based method for obtaining one-particle spectral functions from temperature Green functions

Liu, Jun

2013-02-01

A least-squares-based fitting scheme is proposed to extract an optimal one-particle spectral function from any one-particle temperature Green function. It uses the existing non-negative least squares (NNLS) fitting algorithm to perform the fit, and Tikhonov regularization to handle possible numerical singular behavior. By flexibly adding delta peaks to represent very sharp features of the target spectrum, this scheme guarantees a global minimization of the fitted residue. The performance of this scheme is demonstrated on diverse physical examples, and it is shown to be comparable in performance to the standard Padé analytic continuation scheme.

15. Iterative least squares method for global positioning system

He, Y.; Bilgic, A.

2011-08-01

The efficient implementation of positioning algorithms is investigated for the Global Positioning System (GPS). In order to compute a position, the pseudoranges between the receiver and the satellites are required. The most commonly used algorithm for position computation from pseudoranges is the non-linear Least Squares (LS) method. Linearization converts the non-linear system of equations into an iterative procedure that requires the solution of a linear system of equations in each iteration, i.e. the linear LS method is applied iteratively. CORDIC-based approximate rotations are used while computing the QR decomposition for solving the LS problem in each iteration. By choosing the accuracy of the approximation, e.g. with a chosen number of optimal CORDIC angles per rotation, the LS computation can be simplified. The accuracy of the positioning results is compared for various numbers of required iterations and various approximation accuracies using real GPS data. The results show that very coarse approximations are sufficient for reasonable positioning accuracy. Therefore, the presented method reduces the computational complexity significantly and is highly suited for hardware implementation.
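
The iterative linearized LS procedure the record describes can be sketched in two dimensions as a generic Gauss-Newton trilateration (beacon coordinates and ranges are invented, and the receiver clock bias that real GPS solves for is omitted for brevity):

```python
# Sketch: iterative linearized least squares for 2-D positioning from
# range measurements.  Each iteration linearizes the ranges around the
# current estimate and solves the 2x2 normal equations J^T J d = J^T r.

import math

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

def position(beacons, ranges, guess=(1.0, 1.0), iters=15):
    x, y = guess
    for _ in range(iters):
        a11 = a12 = a22 = g1 = g2 = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            jx, jy = (x - bx) / d, (y - by) / d   # Jacobian row of the range
            res = r - d                           # range residual
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            g1 += jx * res; g2 += jy * res
        dx, dy = solve2(a11, a12, a12, a22, g1, g2)
        x, y = x + dx, y + dy                     # Gauss-Newton update
    return x, y
```

With three well-spread beacons the iteration converges in a handful of steps, which is why coarse per-iteration arithmetic (as with approximate CORDIC rotations) can still yield acceptable positioning accuracy.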

16. A least squares closure approximation for liquid crystalline polymers

Sievenpiper, Traci Ann

2011-12-01

An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing kinetic simulations for a Landau-de Gennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.

17. Parsimonious extreme learning machine using recursive orthogonal least squares.

PubMed

Wang, Ning; Er, Meng Joo; Han, Min

2014-10-01

Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved during model selection; they are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results. PMID:25291736

18. PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION

EPA Science Inventory

The mathematics behind the techniques of principal component analysis and partial least squares regression is presented in detail, starting from the appropriate extreme conditions. The meaning of the resultant vectors and many of their mathematical interrelationships are also pres...

19. A Comparison of the Method of Least Squares and the Method of Averages. Classroom Notes

ERIC Educational Resources Information Center

Glaister, P.

2004-01-01

Two techniques for determining a straight line fit to data are compared. This article reviews two simple techniques for fitting a straight line to a set of data, namely the method of averages and the method of least squares. These methods are compared by showing the results of a simple analysis, together with a number of tests based on randomized…
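The two straight-line fits compared above can be illustrated side by side on a small made-up data set. The method of averages splits the data into two halves and passes the line through the two half-centroids; least squares minimizes the sum of squared residuals:

```python
import numpy as np

# Made-up data, roughly y = x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.3, 1.9, 3.2, 3.8, 5.1])

# Method of averages: line through the centroids of the two half-samples
n = len(x) // 2
cx1, cy1 = x[:n].mean(), y[:n].mean()
cx2, cy2 = x[n:].mean(), y[n:].mean()
m_avg = (cy2 - cy1) / (cx2 - cx1)
c_avg = cy1 - m_avg * cx1

# Method of least squares (degree-1 polynomial fit)
m_ls, c_ls = np.polyfit(x, y, 1)

print(m_avg, c_avg)
print(m_ls, c_ls)
```

For well-behaved data the two slopes are close; least squares is the one that minimizes the residual sum of squares.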

20. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

SciTech Connect

Torello, David; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J.

2015-03-31

This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a measured ratio β₁₁(7075)/β₁₁(2024) of 1.363 agrees well with previous literature and earlier work.

1. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

Torello, David; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J.

2015-03-01

This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a measured ratio β₁₁(7075)/β₁₁(2024) of 1.363 agrees well with previous literature and earlier work.
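The parameter-extraction step can be sketched with a generic nonlinear least-squares curve fit. The model below is hypothetical (a harmonic amplitude that grows linearly with propagation distance but is damped by an attenuation term), not the paper's corrected Rayleigh-wave model, and the parameter values are made up:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    """Hypothetical amplitude model: y(x) = beta * x * exp(-alpha * x)."""
    beta, alpha = p
    return beta * x * np.exp(-alpha * x)

# Synthetic measurements with 1% multiplicative noise
rng = np.random.default_rng(1)
x = np.linspace(1.0, 20.0, 30)
p_true = (4.2, 0.05)
y = model(p_true, x) * (1 + 0.01 * rng.standard_normal(x.size))

# Nonlinear least-squares fit of (beta, alpha) from the residual function
fit = least_squares(lambda p: model(p, x) - y, x0=(1.0, 0.01))
beta_hat, alpha_hat = fit.x
print(beta_hat, alpha_hat)
```

The fitted `beta_hat` plays the role of the extracted nonlinearity parameter; a ratio of such fits on two specimens would give a relative measure like the one reported above.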

2. A Least-Squares Transport Equation Compatible with Voids

SciTech Connect

Hansen, Jon; Peterson, Jacob; Morel, Jim; Ragusa, Jean; Wang, Yaqi

2014-12-01

Standard second-order self-adjoint forms of the transport equation, such as the even-parity, odd-parity, and self-adjoint angular flux equations, cannot be used in voids. Perhaps more importantly, they experience numerical convergence difficulties in near-voids. Here we present a new form of a second-order self-adjoint transport equation that has an advantage relative to standard forms in that it can be used in voids or near-voids. Our equation is closely related to the standard least-squares form of the transport equation, with both equations being applicable in a void and having a nonconservative analytic form. However, unlike the standard least-squares form of the transport equation, our least-squares equation is compatible with source iteration. It has been found that the standard least-squares form of the transport equation with a linear-continuous finite-element spatial discretization has difficulty in the thick diffusion limit. Here we extensively test the 1D slab-geometry version of our scheme with respect to void solutions, spatial convergence rate, and the intermediate and thick diffusion limits. We also define an effective diffusion synthetic acceleration scheme for our discretization. Our conclusion is that our least-squares S_n formulation represents an excellent alternative to existing second-order S_n transport formulations.

3. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

PubMed

Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

2016-03-01

An analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, yielding two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain smaller root mean square errors of prediction than CLS. Additionally, they can also greatly enhance the accuracy of estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. PMID:26810185

4. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

NASA Technical Reports Server (NTRS)

Chamis, Christos C.; Coroneos, Rula M.

2007-01-01

Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

5. Regularized total least squares approach for nonconvolutional linear inverse problems.

PubMed

Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J

1999-01-01

In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442

6. A note on the limitations of lattice least squares

NASA Technical Reports Server (NTRS)

Gillis, J. T.; Gustafson, C. L.; Mcgraw, G. A.

1988-01-01

This paper quantifies the known limitation of lattice least squares to ARX models in terms of the dynamic properties of the system being modeled. This allows determination of the applicability of lattice least squares in a given situation. The central result is that an equivalent ARX model exists for an ARMAX system if and only if the ARMAX system has no transmission zeros from the noise port to the output port. The technique used to prove this fact is a construction using the matrix fractional description of the system. The final section presents two computational examples.

7. Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.

ERIC Educational Resources Information Center

Knol, Dirk L.; ten Berge, Jos M. F.

1989-01-01

An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
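A common, simpler computation in the same spirit is eigenvalue clipping, which makes an improper (indefinite) "correlation" matrix positive semidefinite and restores the unit diagonal. This is a generic sketch with a made-up matrix, not Knol and ten Berge's Procrustes-based least-squares algorithm:

```python
import numpy as np

# An improper "correlation" matrix: symmetric, unit diagonal, but not PSD
R = np.array([[1.0,  0.9,  0.7],
              [0.9,  1.0, -0.9],
              [0.7, -0.9,  1.0]])

w, V = np.linalg.eigh(R)
w_clipped = np.clip(w, 0.0, None)          # zero out the negative eigenvalues
S = V @ np.diag(w_clipped) @ V.T           # nearest PSD matrix in Frobenius norm
d = np.sqrt(np.diag(S))
R_hat = S / np.outer(d, d)                 # rescale to restore the unit diagonal

print(np.linalg.eigvalsh(R_hat).min())     # no longer negative (up to rounding)
```

The diagonal rescaling is a congruence transform, so positive semidefiniteness is preserved; the full least-squares problem with a fixed unit-diagonal constraint requires an iterative algorithm such as the one described above.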

8. Analysis of total least squares in estimating the parameters of a mortar trajectory

SciTech Connect

Lau, D.L.; Ng, L.C.

1994-12-01

Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided slightly improved results, about 10%, over the LS method.
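The LS/TLS distinction can be sketched on a toy linear model y ≈ a·x with synthetic noise in both the data matrix (here, x) and the observation vector (y). TLS is computed from the SVD of the stacked data:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = 2.0
x_clean = np.linspace(0, 10, 50)
x = x_clean + 0.2 * rng.standard_normal(50)            # error in the data matrix
y = a_true * x_clean + 0.2 * rng.standard_normal(50)   # error in the observations

# Ordinary LS: assumes x is exact, minimizes vertical residuals
a_ls = (x @ y) / (x @ x)

# TLS: the smallest right singular vector of [x | y] defines the line through
# the origin that minimizes perpendicular residuals
_, _, Vt = np.linalg.svd(np.column_stack([x, y]))
v = Vt[-1]                    # [v_x, v_y] with v_x*x + v_y*y ≈ 0
a_tls = -v[0] / v[1]

print(a_ls, a_tls)
```

Both estimates recover the slope here; the difference grows as the data-matrix noise becomes large relative to the observation noise, which is the regime where TLS is preferred.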

9. Least squares approximation of two-dimensional FIR digital filters

Alliney, S.; Sgallari, F.

1980-02-01

In this paper, a new method for the synthesis of two-dimensional FIR digital filters is presented. The method is based on a least-squares approximation of the ideal frequency response; an orthogonality property of certain functions, related to the frequency sampling design, improves the computational efficiency.

10. SAS Partial Least Squares (PLS) for Discriminant Analysis

Technology Transfer Automated Retrieval System (TEKTRAN)

The objective of this work was to implement discriminant analysis using SAS partial least squares (PLS) regression for analysis of spectral data. This was done in combination with previous efforts which implemented data pre-treatments including scatter correction, derivatives, mean centering, and v...

11. On the Routh approximation technique and least squares errors

NASA Technical Reports Server (NTRS)

Aburdene, M. F.; Singh, R.-N. P.

1979-01-01

A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.

12. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

NASA Technical Reports Server (NTRS)

Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

2011-01-01

This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
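The basis-function idea can be illustrated with a small least-squares fit in a Chebyshev basis. This is a generic function-approximation sketch with a made-up nonlinearity, not the paper's flight-control law:

```python
import numpy as np

# Approximate an "unknown" scalar nonlinearity on [-1, 1] by least squares
# in a Chebyshev basis of degree 7.
t = np.linspace(-1, 1, 101)
f = np.tanh(2 * t)                                   # stand-in for the unknown function
V = np.polynomial.chebyshev.chebvander(t, 7)         # Chebyshev design matrix
theta, *_ = np.linalg.lstsq(V, f, rcond=None)        # LS parameter estimates

err = np.max(np.abs(V @ theta - f))
print(err)    # small uniform error for this smooth target
```

Orthogonal bases like Chebyshev polynomials keep the design matrix well conditioned, which is what enables the improved parameter convergence claimed above.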

13. Weighted discrete least-squares polynomial approximation using randomized quadratures

Zhou, Tao; Narayan, Akil; Xiu, Dongbin

2015-10-01

We discuss the problem of polynomial approximation of multivariate functions using discrete least squares collocation. The problem stems from uncertainty quantification (UQ), where the independent variables of the functions are random variables with specified probability measure. We propose to construct the least squares approximation on points randomly and uniformly sampled from tensor product Gaussian quadrature points. We analyze the stability properties of this method and prove that the method is asymptotically stable, provided that the number of points scales linearly (up to a logarithmic factor) with the cardinality of the polynomial space. Specific results in both bounded and unbounded domains are obtained, along with a convergence result for Chebyshev measure. Numerical examples are provided to verify the theoretical results.
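A one-dimensional sketch of the sampling idea (assumed setup, not the paper's multivariate experiments): draw the collocation points uniformly at random from a Gauss-Legendre quadrature node set, with the number of samples scaling linearly in the dimension of the polynomial space, then fit by discrete least squares:

```python
import numpy as np

rng = np.random.default_rng(6)
deg = 8                                            # polynomial space has deg+1 basis functions
nodes, _ = np.polynomial.legendre.leggauss(64)     # candidate quadrature nodes
m = 4 * (deg + 1)                                  # samples: linear in the cardinality
pts = rng.choice(nodes, size=m, replace=True)      # uniform random draws from the node set

f = lambda t: np.exp(t) * np.sin(3 * t)            # smooth target function
V = np.polynomial.legendre.legvander(pts, deg)     # Legendre design matrix
coef, *_ = np.linalg.lstsq(V, f(pts), rcond=None)

t = np.linspace(-1, 1, 200)
err = np.max(np.abs(np.polynomial.legendre.legval(t, coef) - f(t)))
print(err)
```

Sampling from quadrature nodes rather than from the measure itself is what gives the method its stability properties in the regime where m grows linearly (up to a log factor) with the polynomial dimension.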

14. Assessment of weighted-least-squares-based gas path analysis

Doel, D. L.

1994-04-01

Manufacturers of gas turbines have searched for three decades for a reliable way to use gas path measurements to determine the health of jet engine components. They have been hindered in this pursuit by the quality of the measurements used to carry out the analysis. Engine manufacturers have chosen weighted-least-squares techniques to reduce the inaccuracy caused by sensor error. While these algorithms are clearly an improvement over the previous generation of gas path analysis programs, they still fail in many situations. This paper describes some of the failures and explores their relationship to the underlying analysis technique. It also describes difficulties in implementing a gas path analysis program. The paper concludes with an appraisal of weighted-least-squares-based gas path analysis.

15. Anisotropy minimization via least squares method for transformation optics.

PubMed

Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

2014-07-28

In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero. PMID:25089468

16. Least-squares estimation of batch culture kinetic parameters.

PubMed

Ong, S L

1983-10-01

This article concerns the development of a simple and effective least-squares procedure for estimating the kinetic parameters in Monod expressions from batch culture data. The basic approach employed in this work was to translate the problem of parameter estimation into a mathematical model containing a single decision variable. The resulting model was then solved by an efficient one-dimensional search algorithm which can be adapted to any microcomputer or advanced programmable calculator. The procedure was tested on synthetic data (substrate concentrations) with different types and levels of error. The effect of endogenous respiration on the estimated values of the kinetic parameters was also assessed. From the results of these analyses the least-squares procedure developed was concluded to be very effective. PMID:18548565
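The "single decision variable" idea can be sketched on the Monod saturation form μ(S) = μ_max·S/(K_s + S): μ_max enters linearly, so for any trial K_s it has a closed-form least-squares value, and the whole fit reduces to a one-dimensional search over K_s. The data below are synthetic, and this is a simplified stand-in for the article's batch-culture equations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic specific-growth-rate observations at several substrate levels
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu_max_true, Ks_true = 1.2, 2.5
rng = np.random.default_rng(3)
mu = mu_max_true * S / (Ks_true + S) + 0.01 * rng.standard_normal(S.size)

def sse(Ks):
    """Sum of squared errors after eliminating the linear parameter mu_max."""
    phi = S / (Ks + S)                   # regressor for the linear parameter
    mu_max = (phi @ mu) / (phi @ phi)    # closed-form LS estimate of mu_max
    return np.sum((mu - mu_max * phi) ** 2)

res = minimize_scalar(sse, bounds=(0.01, 50.0), method="bounded")  # 1-D search
Ks_hat = res.x
phi = S / (Ks_hat + S)
mu_max_hat = (phi @ mu) / (phi @ phi)
print(Ks_hat, mu_max_hat)
```

Eliminating the linear parameter analytically (a variable-projection step) is what makes a simple one-dimensional search sufficient.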

17. Speckle reduction by phase-based weighted least squares.

PubMed

Zhu, Lei; Wang, Weiming; Qin, Jing; Heng, Pheng-Ann

2014-01-01

Although ultrasonography has been widely used in clinical applications, diagnosis is made difficult by the artifacts of ultrasound images, especially speckle noise. This paper proposes a novel framework for speckle reduction using a phase-based weighted least squares optimization. The proposed approach can effectively smooth out speckle noise while preserving the features in the image, e.g., edges with different contrasts. To this end, we first employ a local phase-based measure, which is theoretically intensity-invariant, to extract the edge map from the input image. The edge map is then incorporated into the weighted least squares framework to supervise the optimization during despeckling, so that low-contrast edges are retained while noise is greatly removed. Experimental results on synthetic and clinical ultrasound images demonstrate that our approach performs better than state-of-the-art methods. PMID:25570846
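A 1-D toy analogue of the edge-aware weighted least-squares step: minimize Σ(u_i − g_i)² + λ Σ w_i (u_{i+1} − u_i)², where small weights w_i across an edge prevent it from being smoothed away. The edge map here is taken from the known clean signal as a stand-in for the paper's phase-based measure, and λ and the weight formula are made-up choices:

```python
import numpy as np

rng = np.random.default_rng(7)
g_clean = np.concatenate([np.zeros(50), np.ones(50)])   # a step "edge"
g = g_clean + 0.1 * rng.standard_normal(100)            # noisy observation

lam = 20.0
grad = np.abs(np.diff(g_clean))        # stand-in for a phase-based edge map
w = 1.0 / (1.0 + 100.0 * grad)         # low smoothing weight across the edge

n = len(g)
D = np.diff(np.eye(n), axis=0)         # forward-difference matrix, (n-1) x n
A = np.eye(n) + lam * D.T @ (w[:, None] * D)   # normal equations of the WLS objective
u = np.linalg.solve(A, g)

print(u[:50].std(), u[50:].std())      # flat regions are strongly smoothed
```

Because the objective is quadratic in u, despeckling reduces to one sparse linear solve; all of the edge-preserving behavior lives in the weights.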

18. Least-squares finite element methods for quantum chromodynamics

SciTech Connect

Ketelsen, Christian; Brannick, J; Manteuffel, T; Mccormick, S

2008-01-01

A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2D Schwinger model of quantum electrodynamics.

19. Generalized Least Squares Estimators in the Analysis of Covariance Structures.

ERIC Educational Resources Information Center

Browne, Michael W.

This paper concerns situations in which a p × p covariance matrix is a function of an unknown q × 1 parameter vector γ₀. Notation is defined in the second section, and some algebraic results used in subsequent sections are given. Section 3 deals with asymptotic properties of generalized least squares (G.L.S.) estimators of γ₀. Section 4…

20. Least-squares finite element method for fluid dynamics

NASA Technical Reports Server (NTRS)

Jiang, Bo-Nan; Povinelli, Louis A.

1989-01-01

An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

1. A new least-squares transport equation compatible with voids

SciTech Connect

Hansen, J. B.; Morel, J. E.

2013-07-01

We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties. (authors)

2. Compressible flow calculations employing the Galerkin/least-squares method

NASA Technical Reports Server (NTRS)

Shakib, F.; Hughes, T. J. R.; Johan, Zdenek

1989-01-01

A multielement group, domain decomposition algorithm is presented for solving linear nonsymmetric systems arising in the finite-element analysis of compressible flows employing the Galerkin/least-squares method. The iterative strategy employed is based on the generalized minimum residual (GMRES) procedure originally proposed by Saad and Schultz. Two levels of preconditioning are investigated. Applications to problems of high-speed compressible flow illustrate the effectiveness of the scheme.

3. Least squares restoration of multi-channel images

NASA Technical Reports Server (NTRS)

Chin, Roland T.; Galatsanos, Nikolas P.

1989-01-01

In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.

4. Recursive least-squares learning algorithms for neural networks

SciTech Connect

Lewis, P.S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)

1990-01-01

This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is of order O(N²), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
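A minimal RLS sketch for a linear-in-parameters model y = wᵀx, a toy stand-in for the linearized network training described above. The data and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
P = 1e3 * np.eye(3)          # inverse correlation matrix estimate, large initial value

for _ in range(200):
    x = rng.standard_normal(3)
    y = w_true @ x + 0.01 * rng.standard_normal()
    Px = P @ x
    k = Px / (1.0 + x @ Px)  # RLS gain vector
    w = w + k * (y - w @ x)  # update with the a priori prediction error
    P = P - np.outer(k, Px)  # rank-one downdate of P

print(w)    # approaches w_true
```

The per-step cost is dominated by the rank-one update of the 3 × 3 matrix P, i.e. O(N²) in the number of parameters, matching the complexity noted above.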

5. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

NASA Technical Reports Server (NTRS)

Sen, Syamal K.; Shaykhian, Gholam Ali

2011-01-01

Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, prompting the necessary corrections/modifications to the models, both mathematical and, if necessary, physical.
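A heavily simplified sketch of the workflow just described (not the authors' algorithm or thresholds): score each equation by its residual under a trial least-squares fit as a crude inconsistency index, drop the worst offender, and return the minimum-norm least-squares solution of the reduced system via the pseudoinverse:

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [2.0,  2.0],    # numerically redundant: a scaled copy of row 1
              [1.0, -1.0],
              [1.0,  1.0]])
b = np.array([2.0, 4.0, 0.0, 10.0])   # last equation contradicts the first two

x0 = np.linalg.pinv(A) @ b             # trial least-squares fit
resid = np.abs(A @ x0 - b)             # per-equation "inconsistency index"
keep = resid < 0.5 * resid.max()       # crude threshold: drop the worst equations
x = np.linalg.pinv(A[keep]) @ b[keep]  # minimum-norm LS solution of reduced system
print(x)
```

After the contradictory fourth equation is removed, the remaining system is consistent and the solution is no longer distorted by the outlying equation.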

6. Multilevel first-order system least squares for PDEs

SciTech Connect

McCormick, S.

1994-12-31

The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as up-winding, Petrov-Galerkin, and stream-line diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower-order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

7. Partial least squares Cox regression for genome-wide data.

PubMed

Nygård, Ståle; Borgan, Ornulf; Lingjaerde, Ole Christian; Størvold, Hege Leite

2008-06-01

Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120-S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods. PMID:18188699

8. Solving linear inequalities in a least squares sense

SciTech Connect

Bramley, R.; Winnicka, B.

1994-12-31

Let A ∈ ℝ^(m×n) be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations Aᵀ(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ‖(Ax − b)₊‖, where the i-th component of the vector v₊ is the maximum of zero and the i-th component of v.
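The inequality least-squares problem above can be sketched on a toy system: minimize ‖(Ax − b)₊‖² by gradient descent, using the fact that the gradient of this convex objective is 2Aᵀ(Ax − b)₊. The system, starting point, and step size below are made up for illustration:

```python
import numpy as np

# Feasible region of Ax <= b: x1 + x2 <= 1, x1 >= 0, x2 >= 0
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

x = np.array([2.0, 2.0])        # start at an infeasible point
step = 0.1                      # must satisfy step < 1 / lambda_max(A^T A)
for _ in range(500):
    r_plus = np.maximum(A @ x - b, 0.0)   # only violated inequalities contribute
    x = x - step * 2.0 * A.T @ r_plus     # gradient step on ||(Ax - b)_+||^2

print(np.maximum(A @ x - b, 0.0))   # constraint violations shrink toward zero
```

When the inequality system is feasible, the minimum objective value is zero and the iterates approach the feasible region; when it is infeasible, the method settles at a least-squares compromise among the violated bounds.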

9. Least-squares framework for projection MRI reconstruction

Gregor, Jens; Rannou, Fernando

2001-07-01

Magnetic resonance signals that have very short relaxation times are conveniently sampled in a spherical fashion. We derive a least squares framework for reconstructing three-dimensional source distribution images from such data. Using a finite-series approach, the image is represented as a weighted sum of translated Kaiser-Bessel window functions. The Radon transform thereof establishes the connection with the projection data that one can obtain from the radial sampling trajectories. The resulting linear system of equations is sparse, but quite large. To reduce the size of the problem, we introduce focus of attention. Based on the theory of support functions, this data-driven preprocessing scheme eliminates equations and unknowns that merely represent the background. The image reconstruction and the focus of attention both require a least squares solution to be computed. We describe a projected gradient approach that facilitates a non-negativity constrained version of the powerful LSQR algorithm. In order to ensure reasonable execution times, the least squares computation can be distributed across a network of PCs and/or workstations. We discuss how to effectively parallelize the NN-LSQR algorithm. We close by presenting results from experimental work that addresses both computational issues and image quality using a mathematical phantom.

10. A note on the total least squares problem for coplanar points

SciTech Connect

Lee, S.L.

1994-09-01

The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, …, n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error to coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
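Pearson's closed-form TLS line can be written directly in terms of the data's means, variances, and covariance. A minimal sketch (function name ours, assuming cov(x, y) ≠ 0):

```python
import numpy as np

def tls_line(x, y):
    """Orthogonal (total least squares) line fit y = a*x + b.

    Uses Pearson's closed-form solution in terms of the means,
    variances, and covariance of the data.
    """
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).mean()          # variance of x
    syy = ((y - ym) ** 2).mean()          # variance of y
    sxy = ((x - xm) * (y - ym)).mean()    # covariance
    a = (syy - sxx + np.hypot(syy - sxx, 2 * sxy)) / (2 * sxy)
    b = ym - a * xm
    return a, b

# Exactly collinear points are recovered with zero TLS error.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
a, b = tls_line(x, y)
```

Unlike ordinary least squares, this fit is symmetric in x and y: swapping the roles of the variables yields the same line.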

11. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

SciTech Connect

Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

1985-01-01

In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentration was <1%.
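The calibrate-then-predict structure described above — estimate pure-component spectra from standard mixtures, then invert for unknown concentrations — can be sketched in a noise-free linear setting. This is an illustrative classical least squares (CLS) toy, not the authors' weighted method; the function names and example spectra are ours:

```python
import numpy as np

def cls_calibrate(C, A):
    """Calibration step: estimate pure-component spectra K (rows)
    from mixture spectra A and known concentrations C, via A ~= C @ K."""
    return np.linalg.lstsq(C, A, rcond=None)[0]

def cls_predict(K, a):
    """Prediction step: least squares concentrations for one spectrum a,
    solving K.T @ c ~= a."""
    return np.linalg.lstsq(K.T, a, rcond=None)[0]

# Two components, five wavelengths, ideal Beer-Lambert (no nonlinearity).
K_true = np.array([[1.0, 0.5, 0.2, 0.0, 0.1],
                   [0.0, 0.3, 0.8, 1.0, 0.4]])
C_cal = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.3, 0.7]])
A_cal = C_cal @ K_true                      # calibration mixture spectra
K = cls_calibrate(C_cal, A_cal)             # recovered pure spectra
c = cls_predict(K, np.array([0.2, 0.8]) @ K_true)   # unknown sample
```

The paper's refinements (weights, intercepts, pathlength terms) all enter as extra columns or row scalings in the same two least-squares solves.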

12. Positive Scattering Cross Sections using Constrained Least Squares

SciTech Connect

Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

1999-09-27

A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.

13. Robust inverse kinematics using damped least squares with dynamic weighting

NASA Technical Reports Server (NTRS)

Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

1994-01-01

This paper presents a general method for calculating the inverse kinematics, with singularity and joint limit robustness, for both redundant and non-redundant serial-link manipulators. The damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution; the weighting selectively reduces specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and well-behaved slave manipulator motion resulted under teleoperational control.
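The damped least squares inverse can be sketched for a two-link planar arm. This is a minimal illustration without the paper's dynamic weighting matrices; the damping value and link geometry are assumptions:

```python
import numpy as np

def dls_step(J, err, damping=0.1):
    """One damped least squares IK update.

    dq = J^T (J J^T + lambda^2 I)^-1 err, which stays bounded near
    singular configurations where the plain pseudoinverse blows up.
    """
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), err)

# Two-link planar arm with unit link lengths.
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.6])
target = np.array([1.2, 0.8])           # a reachable, non-singular point
for _ in range(200):
    q = q + dls_step(jacobian(q), target - fk(q))
```

Away from singularities the damping term is negligible and the iteration behaves like Newton's method; at a singularity it merely slows the motion instead of producing unbounded joint rates.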

14. Recursive least squares estimation and Kalman filtering by systolic arrays

NASA Technical Reports Server (NTRS)

Chen, M. J.; Yao, K.

1988-01-01

One of the most promising new directions for high-throughput-rate problems is that based on systolic arrays. In this paper, using the matrix-decomposition approach, a systolic Kalman filter is formulated as a modified square-root information filter consisting of a whitening filter followed by a simple least-squares operation based on the systolic QR algorithm. By proper skewing of the input data, a fully pipelined time and measurement update systolic Kalman filter can be achieved with O(n²) processing cells, resulting in a system throughput rate of O(n).

15. A semi-implicit finite strain shell algorithm using in-plane strains based on least-squares

Areias, P.; Rabczuk, T.; de Sá, J. César; Natal Jorge, R.

2015-04-01

The use of a semi-implicit algorithm at the constitutive level allows a robust and concise implementation of low-order effective shell elements. We perform a semi-implicit integration in the stress update algorithm for finite strain plasticity: rotation terms (highly nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the (in this case evolving) reference configuration and relative Green-Lagrange strains (quadratic) are used to account for change in the equilibrium configuration implicitly. We parametrize both reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms which use a common configuration. A finite strain quadrilateral element with least-squares assumed in-plane shear strains (in curvilinear coordinates) and classical transverse shear assumed strains is introduced. It is an alternative to enhanced-assumed-strain (EAS) formulations and, contrary to this, produces an element satisfying ab-initio the Patch test. No additional degrees-of-freedom are present, contrasting with EAS. Least-squares fit allows the derivation of invariant finite strain elements which are both in-plane and out-of-plane shear-locking free and amenable to standardization in commercial codes. Two thickness parameters per node are adopted to reproduce the Poisson effect in bending. Metric components are fully deduced and exact linearization of the shell element is performed. Both isotropic and anisotropic behavior is presented in elasto-plastic and hyperelastic examples.

16. Nonlinear least squares (Levenberg-Marquardt algorithms) for geodetic adjustment and coordinate transformation.

Kheloufi, N.; Kahlouche, S.; Lamara, R. Ait Ahmed

2009-04-01

The resolution of MREs (multiple regression equations) is an important tool for fitting different geodetic networks. Nevertheless, in various fields of engineering and earth science, certain cases demand more accuracy, and ordinary (linear) least squares proves to be limited. Thus, we have to use new numerical methods of resolution that can provide greater efficiency of polynomial modeling. In geodesy, the accuracy of coordinate determination and network adjustment is very important; that is why, instead of being limited to linear models, we apply nonlinear least squares to the transformation problem between geodetic systems. This need appears especially in the case of the Nord-Sahara datum (Algeria), for which linear models are not very appropriate because of the lack of information about the geoid's undulation. In this paper, our main aim is to demonstrate the importance of using nonlinear least squares to improve the quality of geodetic adjustment and coordinate transformation, as well as the extent of its use. The algorithms concern the application of two models: a three-dimensional one (global transformation) and a two-dimensional one (local transformation) over a large area (Algeria). We compute coordinate transformation parameters and their RMS errors both by ordinary least squares and by the new algorithms, then perform a statistical analysis in order to compare the linear adjustment, with its two variants (local and global), against the nonlinear one. In this context, a set of 16 benchmarks has been used to compute the transformation parameters (3D and 2D). Different nonlinear optimization algorithms (Newton, steepest descent, and Levenberg-Marquardt) have been implemented to solve the transformation problem. Conclusions and recommendations are given with respect to the suitability, accuracy, and efficiency of each method. Key words: MREs, Nord Sahara, global
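The Levenberg-Marquardt iteration named in the title can be sketched on a small nonlinear fitting problem. This is a generic illustration, not the geodetic transformation model; the exponential model and the damping-update schedule are ours:

```python
import numpy as np

def levenberg_marquardt(x, y, p0, iters=100):
    """Minimal Levenberg-Marquardt for fitting y = a*exp(b*x).

    At each step solve (J^T J + mu*I) dp = -J^T r; shrink mu after a
    successful step, grow it after a failed one.
    """
    p = np.asarray(p0, float)
    mu = 1e-3
    def resid(p):
        return y - p[0] * np.exp(p[1] * x)
    for _ in range(iters):
        a, b = p
        e = np.exp(b * x)
        J = np.column_stack([-e, -a * x * e])   # Jacobian of the residual
        r = resid(p)
        dp = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
        if np.sum(resid(p + dp) ** 2) < np.sum(r ** 2):
            p, mu = p + dp, mu * 0.5            # accept step, trust model more
        else:
            mu *= 2.0                           # reject step, damp harder
    return p

x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x)                        # noise-free synthetic data
p = levenberg_marquardt(x, y, [1.0, 1.0])
```

Large mu makes the step behave like (damped) steepest descent, small mu like Gauss-Newton, which is exactly the interpolation between the two methods the abstract compares.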

17. EFFICIENCY OF LEAST SQUARES ESTIMATORS IN THE PRESENCE OF SPATIAL AUTOCORRELATION

EPA Science Inventory

The authors consider the effect of spatial autocorrelation on inferences made using ordinary least squares estimation. It is found, in some cases, that ordinary least squares estimators provide a reasonable alternative to the estimated generalized least squares estimators recom...

18. Single Object Tracking With Fuzzy Least Squares Support Vector Machine.

PubMed

Zhang, Shunli; Zhao, Sicong; Sui, Yao; Zhang, Li

2015-12-01

Single object tracking, in which a target is often initialized manually in the first frame and then is tracked and located automatically in the subsequent frames, is a hot topic in computer vision. The traditional tracking-by-detection framework, which often formulates tracking as a binary classification problem, has been widely applied and achieved great success in single object tracking. However, there are some potential issues in this formulation. For instance, the boundary between the positive and negative training samples is fuzzy, and the objectives of tracking and classification are inconsistent. In this paper, we attempt to address the above issues from the fuzzy system perspective and propose a novel tracking method by formulating tracking as a fuzzy classification problem. First, we introduce the fuzzy strategy into tracking and propose a novel fuzzy tracking framework, which can measure the importance of the training samples by assigning different memberships to them and offer more strict spatial constraints. Second, we develop a fuzzy least squares support vector machine (FLS-SVM) approach and employ it to implement a concrete tracker. In particular, the primal form, dual form, and kernel form of FLS-SVM are analyzed and the corresponding closed-form solutions are derived for efficient realizations. Besides, a least squares regression model is built to control the update adaptively, retaining the robustness of the appearance model. The experimental results demonstrate that our method can achieve comparable or superior performance to many state-of-the-art methods. PMID:26441419
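The closed-form solutions mentioned above are characteristic of least squares SVMs: training reduces to one linear system. A minimal LS-SVM regression sketch (not the paper's fuzzy-membership variant; the kernel width and regularization values are assumptions):

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Least squares SVM regression trained by one linear solve:

        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]

    with an RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    def predict(Xq):
        dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-dq / (2 * sigma ** 2)) @ alpha + b
    return predict

X = np.linspace(-2, 2, 30).reshape(-1, 1)
y = np.sin(X).ravel()
predict = lssvm_fit(X, y)
```

Replacing the identity term I/gamma with a diagonal of per-sample weights gives the fuzzy/weighted variants: each membership simply rescales one sample's slack penalty.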

19. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

PubMed Central

Chen, Yanguang

2016-01-01

In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 Chinese regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
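The Durbin-Watson statistic and a Moran-style residual index can both be written in a few lines. A minimal sketch (the spatial index here is a simplified stand-in for the paper's statistics; the weight matrix is an assumed nearest-neighbor lattice):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic for ordered residuals:
    ~2 => no serial correlation, <2 positive, >2 negative."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def spatial_autocorr(resid, W):
    """Moran-style autocorrelation of residuals with a row-normalized
    spatial weight matrix W."""
    e = resid - resid.mean()
    return (e @ W @ e) / (e @ e)

rng = np.random.default_rng(0)
white = rng.standard_normal(500)        # uncorrelated residuals
trend = np.cumsum(white)                # strongly positively correlated
dw_white = durbin_watson(white)         # near 2
dw_trend = durbin_watson(trend)         # near 0

# Nearest-neighbor weight matrix on a 1-D lattice, row-normalized.
n = 500
W = np.zeros((n, n))
idx = np.arange(n)
W[idx[:-1], idx[1:]] = 1.0
W[idx[1:], idx[:-1]] = 1.0
W /= W.sum(axis=1, keepdims=True)
I_trend = spatial_autocorr(trend, W)    # strongly positive
```

The point of the paper is that durbin_watson above changes if the observations are shuffled, while the weight-matrix form depends only on which observations are neighbors.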

20. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

NASA Technical Reports Server (NTRS)

Chen, Fang-Jenq

1997-01-01

Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporated with distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.

1. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

DOEpatents

Keenan, Michael R.

2008-12-30

Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
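Alternating least squares with constraints can be sketched by clipping each unconstrained solve at zero. This naive nonnegativity treatment is a simple stand-in for the patent's biased ALS, purely for illustration; all names are ours:

```python
import numpy as np

def nn_als(D, k, iters=200, seed=0):
    """Alternating least squares factorization D ~= C @ S.T with
    nonnegativity enforced by clipping each unconstrained least
    squares solve at zero (a crude constrained-ALS variant)."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    C = rng.random((m, k))
    for _ in range(iters):
        # Solve for S holding C fixed, then for C holding S fixed.
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Rank-2 nonnegative data should be reproduced closely.
rng = np.random.default_rng(1)
D = rng.random((20, 2)) @ rng.random((2, 15))
C, S = nn_als(D, 2)
```

When the components are collinear, many (C, S) pairs fit D equally well; the clipping above is exactly the kind of hard constraint whose implicit bias the patent proposes to offset.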

2. Evaluation of fatty proportion in fatty liver using least squares method with constraints.

PubMed

Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay

2014-01-01

Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from a ROI to a 3-parameter tissue model was used to get attenuation coefficient imaging in fatty liver. Since fat's attenuation value is higher than normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method can get accurate attenuation estimates. It is proved that the attenuation values have a positive correlation with the fatty proportion, which can be used to evaluate the syndrome of fatty liver. PMID:25226986

3. Near-least-squares radio frequency interference suppression

Miller, Timothy R.; McCorkle, John W.; Potter, Lee C.

1995-06-01

We present an algorithm for the removal of narrow-band interference from wideband signals. We apply the algorithm to suppress radio frequency interference encountered by ultra-wideband synthetic aperture radar systems used for foliage- and ground-penetrating imaging. For this application, we seek maximal reduction of interference energy, minimal loss and distortion of wideband target responses, and real-time implementation. To balance these competing objectives, we exploit prior information concerning the interference environment in designing an estimate-and-subtract estimation algorithm. The use of prior knowledge allows fast, near-least-squares estimation of the interference and permits iterative target signature excision in the interference estimation procedure to decrease estimation bias. The result is greater interference suppression, less target signature loss and distortion, and faster computation than is provided by existing techniques.

4. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

Yu, Yaojun

A novel quality prediction method with a mobile time window is proposed for small-batch production processes, based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which the small-batch learning problem is handled well: nearer samples in the history data are assigned larger weights, while farther samples are assigned smaller ones. A typical machining process, cutting a bearing outer race, is carried out, and the real measured data are used in a comparison experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch production processes.

5. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

NASA Technical Reports Server (NTRS)

Grauer, Jared A.; Morelli, Eugene A.

2016-01-01

A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
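The recursive least squares estimator underlying the method can be sketched in its standard covariance-update form. This is the textbook RLS, not the paper's autocorrelation correction; the forgetting factor and initial covariance are assumptions:

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard exponentially weighted recursive least squares."""

    def __init__(self, n, lam=1.0, p0=1e6):
        self.theta = np.zeros(n)     # parameter estimate
        self.P = p0 * np.eye(n)      # covariance-like matrix
        self.lam = lam               # forgetting factor (1.0 = no forgetting)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                     # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)
        self.P = (self.P - np.outer(k, Px)) / self.lam   # rank-1 downdate
        return self.theta

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])
rls = RecursiveLeastSquares(2)
for _ in range(300):
    x = rng.standard_normal(2)
    y = x @ true_theta + 0.01 * rng.standard_normal()
    rls.update(x, y)
```

The paper's contribution sits on top of this recursion: because successive residuals y − xᵀθ are colored rather than white, the naive covariance P understates the true parameter uncertainty, and a recursive residual autocorrelation is used to correct it.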

6. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

Wang, Qiqi; Hu, Rui; Blonigan, Patrick

2014-06-01

The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

7. Flow Applications of the Least Squares Finite Element Method

NASA Technical Reports Server (NTRS)

Jiang, Bo-Nan

1998-01-01

The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

8. Random errors in interferometry with the least-squares method

SciTech Connect

Wang Qi

2011-01-20

This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

9. A Galerkin least squares approach to viscoelastic flow.

SciTech Connect

Rao, Rekha R.; Schunk, Peter Randall

2015-10-01

A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails for relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

10. Recursive least square vehicle mass estimation based on acceleration partition

Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

2014-05-01

Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal, and it removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least square method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined to not only reduce the estimated error but also ensure enough data for the resistance estimation and mass estimation in some critical situations. The modification of the algorithm is also discussed to improve the result of the mass estimation. Experimental data on asphalt, plastic runway, and gravel roads, as well as on sloping roads, are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of the road surface and gradient changes and that it may be valuable in engineering applications.

11. A comparison of three additive tree algorithms that rely on a least-squares loss criterion.

PubMed

Smith, T J

1998-11-01

The performances of three additive tree algorithms which seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946

12. Weighted least-squares algorithm for phase unwrapping based on confidence level in frequency domain

Wang, Shaohua; Yu, Jie; Yang, Cankun; Jiao, Shuai; Fan, Jun; Wan, Yanyan

2015-12-01

Phase unwrapping is a key step in InSAR (synthetic aperture radar interferometry) processing, and its result may directly affect the accuracy of DEMs (digital elevation models) and ground deformation measurements. However, decoherence phenomena such as shadow and layover, in areas of severe land subsidence where the terrain is steep and the slope changes greatly, cause error transmission in the differential wrapped phase information, leading to inaccurate unwrapped phase. In order to eliminate the effect of noise and reduce the effect of the undersampling caused by topographic factors, a weighted least-squares method based on a confidence level in the frequency domain is used in this study. The method expresses the terrain slope in the interferogram as the partial phase frequency in the range and azimuth directions and integrates these into a confidence level. This parameter is used as a constraint in the nonlinear least squares phase unwrapping algorithm, to smooth unwanted unwrapped-phase gradients and improve the accuracy of phase unwrapping. Finally, comparison with interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm has higher accuracy and stability than conventional weighted least-squares phase unwrapping algorithms and can take terrain factors into account.

13. Analysis and computation of a least-squares method for consistent mesh tying

Day, David; Bochev, Pavel

2008-08-01

In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197-1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342-352], applied to the partial differential equation −∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Theoretical error estimates are illustrated by numerical experiments.

14. A least-squares computational tool kit. Nuclear data and measurements series

SciTech Connect

Smith, D.L.

1993-04-01

The information assembled in this report is intended to offer a useful computational tool kit to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
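The generalized least-squares condition mentioned above has the familiar closed-form solution β = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y, which reduces to ordinary least squares when V = σ²I. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: beta = (X^T V^-1 X)^-1 X^T V^-1 y,
    the minimum-variance linear estimate when the residuals have
    covariance matrix V."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# With V = I, the GLS estimate coincides with ordinary least squares.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([2.0, 3.0]) + 0.1 * rng.standard_normal(50)
beta = gls(X, y, np.eye(50))
```

Supplying a full covariance matrix V (built, e.g., from the report's covariance-generation procedures) is what distinguishes the generalized condition from the simple one.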

15. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

Greenwood, L. R.; Johnson, C. D.

2016-02-01

The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

16. Non-parametric and least squares Langley plot methods

Kiedron, P. W.; Michalsky, J. J.

2015-04-01

Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(-τ·m), where a plot of ln(V) voltage vs. m air mass yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
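The Langley calibration described above reduces to an ordinary least-squares line fit of ln(V) against air mass m, whose intercept recovers ln(V0). A minimal sketch; the values of V0, τ, the air-mass range, and the noise level are illustrative, not from the paper:

```python
import numpy as np

# Synthetic Langley experiment: direct-sun voltage follows the
# Bouguer-Lambert-Beer law V = V0 * exp(-tau * m).
rng = np.random.default_rng(0)
V0, tau = 2.5, 0.12                               # illustrative "true" values
m = np.linspace(1.0, 6.0, 40)                     # air mass
V = V0 * np.exp(-tau * m) * np.exp(rng.normal(0, 0.002, m.size))

# Langley plot: ln(V) vs m is a straight line with slope -tau and
# intercept ln(V0); fit it by ordinary least squares.
slope, intercept = np.polyfit(m, np.log(V), 1)
V0_est, tau_est = np.exp(intercept), -slope
```

Once ln(V0) is calibrated this way, any later measurement pair (V, m) yields the optical depth as τ = (ln(V0) − ln(V)) / m.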

17. Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors

SciTech Connect

Gavel, D

2002-10-08

A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few ''high order'' modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.

18. Battery state-of-charge estimation using approximate least squares

Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

2015-03-01

In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

19. Curve-skeleton extraction using iterative least squares optimization.

PubMed

Wang, Yu-Shuen; Lee, Tong-Yee

2008-01-01

A curve skeleton is a compact representation of 3D objects and has numerous applications. It can be used to describe an object's geometry and topology. In this paper, we introduce a novel approach for computing curve skeletons for volumetric representations of the input models. Our algorithm consists of three major steps: 1) using iterative least squares optimization to shrink models and, at the same time, preserving their geometries and topologies, 2) extracting curve skeletons through the thinning algorithm, and 3) pruning unnecessary branches based on shrinking ratios. The proposed method is less sensitive to noise on the surface of models and can generate smoother skeletons. In addition, our shrinking algorithm requires little computation, since the optimization system can be factorized and stored in the pre-computational step. We demonstrate several extracted skeletons that help evaluate our algorithm. We also experimentally compare the proposed method with other well-known methods. Experimental results show advantages when using our method over other techniques. PMID:18467765

20. Least-squares solution of ill-conditioned systems. II

Branham, R. L., Jr.

1980-11-01

A singular-value analysis of normal equations from observations of minor planets 6 (Hebe), 7 (Iris), 8 (Flora), 9 (Metis), and 15 (Eunomia) is undertaken to determine corrections to a number of astronomical parameters, particularly the equinox correction for the FK4. In a previous investigation the test for small singular values was criticized because it resulted in discordant equinox determinations. Here it is shown that none of the tests employed by singular-value analysis leads to solutions superior to those given by classical least squares. It is concluded that singular-value analysis has legitimate uses in astronomy, but that it is misapplied when employed to estimate astronomical parameters in a well defined model. Also discussed is the question of whether it is preferable to reduce the equations of condition by orthogonal transformations rather than to form normal equations. Some suggestions are made regarding the desirability of planning observational programs in such a way that the observations do not lead to extremely ill-conditioned systems.

1. A recursive least squares-based demodulator for electrical tomography

Xu, Lijun; Zhou, Haili; Cao, Zhang

2013-04-01

In this paper, a recursive least squares (RLS)-based demodulator is proposed for Electrical Tomography (ET) that employs sinusoidal excitation. The new demodulator can output preliminary demodulation results on the amplitude and phase of a sinusoidal signal by processing the first two sampling data, and the demodulation precision and signal-to-noise ratio can be further improved by involving more sampling data in a recursive way. Thus a trade-off between speed and precision in the demodulation of electrical parameters can be made flexibly according to the specific requirements of an ET system. The RLS-based demodulator is suitable to be implemented in a field programmable gate array (FPGA). Numerical simulation was carried out to prove its feasibility and optimize the relevant parameters for hardware implementation, e.g., the precision of the fixed-point parameters, sampling rate, and resolution of the analog to digital convertor. An FPGA-based capacitance measurement circuit for electrical capacitance tomography was constructed to implement and validate the RLS-based demodulator. Both simulation and experimental results demonstrate that the proposed demodulator is valid and capable of making a trade-off between demodulation speed and precision, and brings more flexibility to the hardware design of ET systems.
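The core of such a demodulator can be sketched in floating point: with the excitation frequency known, the sample y_k = A·cos(ω·t_k) + B·sin(ω·t_k) + noise is linear in (A, B), so a standard RLS recursion refines the amplitude and phase with every new sample. This is a simplified sketch, not the paper's fixed-point FPGA design; all signal parameters are illustrative:

```python
import numpy as np

# RLS estimate of the in-phase/quadrature components (A, B) of a
# sinusoid sampled at known excitation frequency w.
rng = np.random.default_rng(1)
w, A_true, B_true = 2 * np.pi * 50.0, 1.3, -0.7   # illustrative values
t = np.arange(200) / 5000.0                       # sampling instants
y = A_true * np.cos(w * t) + B_true * np.sin(w * t) + rng.normal(0, 0.01, t.size)

theta = np.zeros(2)                               # running estimate [A, B]
P = 1e6 * np.eye(2)                               # large initial covariance
for tk, yk in zip(t, y):
    h = np.array([np.cos(w * tk), np.sin(w * tk)])  # regressor at this sample
    k = P @ h / (1.0 + h @ P @ h)                   # RLS gain
    theta = theta + k * (yk - h @ theta)            # innovation update
    P = P - np.outer(k, h @ P)                      # covariance update

amplitude = np.hypot(*theta)                      # sqrt(A^2 + B^2)
```

The estimate is usable after the first two samples (the system becomes determined) and sharpens as more samples arrive, which is the speed/precision trade-off the abstract describes.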

2. Fast frequency acquisition via adaptive least squares algorithm

NASA Technical Reports Server (NTRS)

Kumar, R.

1986-01-01

A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.

3. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

NASA Technical Reports Server (NTRS)

Orr, Jeb S.

2012-01-01

A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

4. Non-parametric and least squares Langley plot methods

Kiedron, P. W.; Michalsky, J. J.

2016-01-01

Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(-τ·m), where a plot of ln(V) voltage vs. m air mass yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

5. A duct mapping method using least squares support vector machines

Douvenot, RéMi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph

2008-12-01

This paper introduces a "refractivity from clutter" (RFC) approach with an inversion method based on a pregenerated database. The RFC method exploits the information contained in the radar sea clutter return to estimate the refractive index profile. Whereas initial efforts are based on algorithms giving a good accuracy involving high computational needs, the present method is based on a learning machine algorithm in order to obtain a real-time system. This paper shows the feasibility of a RFC technique based on the least squares support vector machine inversion method by comparing it to a genetic algorithm on simulated and noise-free data, at 1 and 5 GHz. These data are simulated in the presence of ideal trilinear surface-based ducts. The learning machine is based on a pregenerated database computed using Latin hypercube sampling to improve the efficiency of the learning. The results show that little accuracy is lost compared to a genetic algorithm approach. The computational time of a genetic algorithm is very high, whereas the learning machine approach is real time. The advantage of a real-time RFC system is that it could work on several azimuths in near real time.
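The learning machine at the heart of the inversion above, LS-SVM regression, trains by solving a single linear system rather than a quadratic program: with kernel matrix K and regularization constant γ, the bias b and support values α satisfy [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y]. A minimal sketch with an RBF kernel; the 1-D target function, kernel width, and γ are illustrative, not the paper's refractivity database:

```python
import numpy as np

# Least-squares support vector machine (LS-SVM) regression: training
# reduces to one linear system in the bias b and support values alpha.
def rbf(X, Z, sigma=0.8):
    # Gaussian (RBF) kernel matrix between 1-D point sets X and Z
    return np.exp(-(X[:, None] - Z[None, :]) ** 2 / (2 * sigma ** 2))

X = np.linspace(-3.0, 3.0, 40)       # training inputs (illustrative)
y = np.exp(-X ** 2 / 2)              # smooth target to learn
gamma = 1e4                          # regularization constant

n = X.size
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0                       # top row: sum(alpha) = 0
M[1:, 0] = 1.0                       # first column: bias term
M[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

def predict(X_new):
    # f(x) = sum_i alpha_i k(x, x_i) + b
    return rbf(X_new, X) @ alpha + b
```

Because training is one dense solve, evaluating a pregenerated database of such models at inversion time is cheap, which is what enables the near-real-time operation the abstract claims.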

6. Robustness of ordinary least squares in randomized clinical trials.

PubMed

Judkins, David R; Porter, Kristin E

2016-05-20

There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors work well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors. PMID:26694758
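The "OLS with robust standard errors" analysis the review recommends is straightforward to compute from the normal equations: fit by least squares, then form the HC0 (White) sandwich covariance from the squared residuals. The simulated trial below, with a randomized arm and a strongly skewed zero-inflated outcome, is an illustrative toy in the spirit of the paper's simulations, not their actual design:

```python
import numpy as np

# OLS treatment-effect estimate with HC0 robust standard errors.
rng = np.random.default_rng(2)
n = 500
treat = rng.integers(0, 2, n)                 # randomized arm (0/1)
x = rng.normal(size=n)                        # baseline covariate
# Zero-inflated, skewed outcome with a true treatment effect of 0.5
y = 0.5 * treat + 0.3 * x + rng.exponential(1.0, n) * (rng.random(n) > 0.6)

X = np.column_stack([np.ones(n), treat, x])   # design: intercept, arm, covariate
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS coefficient estimates
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)        # sum_i e_i^2 x_i x_i'
robust_cov = XtX_inv @ meat @ XtX_inv         # HC0 sandwich covariance
robust_se = np.sqrt(np.diag(robust_cov))      # robust standard errors
```

The sandwich estimator keeps the usual OLS point estimates but makes the standard errors valid under the non-normal, heteroskedastic outcomes the paper studies.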

7. Improving the gradient in least-squares reverse time migration

Liu, Qiancheng

2016-04-01

Least-squares reverse time migration (LSRTM) is a linearized inversion technique used for estimating high-wavenumber reflectivity. However, due to the redundant overlay of the band-limited source wavelet, the gradient based on the cross-correlated imaging principle suffers from a loss of wavenumber information. We first prepare the residuals between observed and demigrated data by deconvolving with the amplitude spectrum of the source wavelet, and then migrate the preprocessed residuals by using the cross-correlation imaging principle. In this way, a gradient that preserves the spectral signature of data residuals is obtained. The computational cost of source-wavelet removal is negligible compared to that of wavefield simulation. The two-dimensional Marmousi model containing complex geological structures is considered to test our scheme. Numerical examples show that our improved gradient in LSRTM has a better convergence behavior and promises inverted results of higher resolution. Finally, we attempt to update the background velocity with our inverted velocity perturbations to approach the true velocity.

8. Fast Dating Using Least-Squares Criteria and Algorithms

PubMed Central

To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

2016-01-01

Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley–Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to

9. Fast Dating Using Least-Squares Criteria and Algorithms.

PubMed

To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

2016-01-01

Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

10. Modified fast frequency acquisition via adaptive least squares algorithm

NASA Technical Reports Server (NTRS)

Kumar, Rajendra (Inventor)

1992-01-01

A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

11. Finding A Minimally Informative Dirichlet Prior Using Least Squares

SciTech Connect

Dana Kelly

2011-03-01

In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

12. The moving-least-squares-particle hydrodynamics method (MLSPH)

SciTech Connect

Dilts, G.

1997-12-31

An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

13. Weighted least square estimates of the parameters of a model of survivorship probabilities.

PubMed

Mitra, S

1987-06-01

"A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212

14. Comparing implementations of penalized weighted least-squares sinogram restoration

PubMed Central

Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

2010-01-01

Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix
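The iterative (conjugate-gradient) strategy compared above amounts to minimizing (y − Ap)ᵀW(y − Ap) + β·pᵀRp by running CG on the normal equations (AᵀWA + βR)p = AᵀWy. A small self-contained sketch; the blur model A, the weights, and β are illustrative stand-ins for the paper's detector model:

```python
import numpy as np

# PWLS sinogram restoration via conjugate gradients on the normal
# equations (A'WA + beta*R) p = A'Wy, with a roughness penalty R.
n = 64
p_true = np.sin(np.linspace(0.0, np.pi, n)) ** 2          # clean profile
A = np.eye(n) + 0.25 * np.eye(n, k=1) + 0.25 * np.eye(n, k=-1)  # toy blur
rng = np.random.default_rng(3)
y = A @ p_true + rng.normal(0.0, 0.01, n)                 # degraded data
W = np.diag(np.full(n, 100.0))                            # inverse-variance weights
D = np.diff(np.eye(n), axis=0)                            # first differences
R = D.T @ D                                               # roughness penalty
beta = 0.1

H = A.T @ W @ A + beta * R                                # SPD system matrix
rhs = A.T @ W @ y
p = np.zeros(n)                                           # CG on H p = rhs
r = rhs - H @ p
d = r.copy()
for _ in range(2 * n):
    Hd = H @ d
    step = (r @ r) / (d @ Hd)
    p = p + step * d
    r_new = r - step * Hd
    if np.linalg.norm(r_new) < 1e-10:                     # converged
        break
    d = r_new + ((r_new @ r_new) / (r @ r)) * d
    r = r_new
```

Because H is symmetric positive definite, CG needs only matrix-vector products, which is what makes the iterative strategy attractive when the closed-form matrix inversion becomes too large.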

15. Comparing implementations of penalized weighted least-squares sinogram restoration

SciTech Connect

Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

2010-11-15

Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix

16. Data-adapted moving least squares method for 3-D image interpolation

Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

2013-12-01

In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
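The polynomial-reproduction idea underlying the method above can be shown in one dimension: at each evaluation point, moving least squares fits a local polynomial by distance-weighted least squares and evaluates it there, which reproduces polynomials of the basis degree exactly. This is a generic 1-D MLS sketch, not the authors' data-adapted 3-D scheme; the node spacing and Gaussian weight scale are illustrative:

```python
import numpy as np

# 1-D moving least squares: local weighted quadratic fit at each point.
def mls(x_eval, x_nodes, f_nodes, degree=2, h=0.3):
    out = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        w = np.exp(-((x_nodes - x) / h) ** 2)      # locality weights
        V = np.vander(x_nodes - x, degree + 1)      # basis shifted to x
        # Weighted normal equations for the local polynomial coefficients
        c = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * f_nodes))
        out[i] = c[-1]                              # constant term = value at x
    return out

x_nodes = np.linspace(0.0, 1.0, 21)                 # sample grid
f_nodes = 3 * x_nodes ** 2 - 2 * x_nodes + 1        # a quadratic test function
x_eval = np.linspace(0.05, 0.95, 50)                # evaluation points
approx = mls(x_eval, x_nodes, f_nodes)
```

The data-adapted variant in the paper modifies the weighting to follow local image structure; the quadratic-reproduction property sketched here is the baseline it improves on.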

17. FOSLS (first-order systems least squares): An overview

SciTech Connect

Manteuffel, T.A.

1996-12-31

The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly, it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.

18. The Least Squares Stochastic Finite Element Method in Structural Stability Analysis of Steel Skeletal Structures

Kamiński, M.; Szafran, J.

2015-05-01

The main purpose of this work is to verify the influence of the weighting procedure in the Least Squares Method on the probabilistic moments resulting from the stability analysis of steel skeletal structures. We discuss this issue also in the context of the geometrical nonlinearity appearing in the Stochastic Finite Element Method equations for the stability analysis and preservation of the Gaussian probability density function employed to model the Young modulus of a structural steel in this problem. The weighting procedure itself (with both triangular and Dirac-type weighting functions) shows rather marginal influence on all probabilistic coefficients under consideration. This hybrid stochastic computational technique consisting of the FEM and computer algebra systems (ROBOT and MAPLE packages) may be used for analogous nonlinear analyses in structural reliability assessment.

19. Probability-based least square support vector regression metamodeling technique for crashworthiness optimization problems

Wang, Hu; Li, Enying; Li, G. Y.

2011-03-01

This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear and large-scale problem, which is composed of various nonlinearities, such as geometry, material, and contact, and needs a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least square support vector regression is suggested to construct metamodels by considering structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the stage of design of experiment (DOE). In this paper, a cylinder and a full-vehicle frontal collision are involved. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.

20. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

Khawaja, Taimoor Saleem

A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior
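
The computational appeal of LS-SVMs mentioned above is that the inequality-constrained quadratic program of a standard SVM is replaced by a single linear (KKT) system. A minimal regression sketch, assuming an RBF kernel and illustrative hyperparameters `gamma` and `sigma` (not the thesis's settings):

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM training: equality constraints and squared loss reduce
    the SVM quadratic program to one linear KKT system."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # ridge term from the squared loss
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                  # bias b, dual coefficients alpha

def lssvm_predict(X_new, X, b, alpha, sigma=0.5):
    return rbf_kernel(X_new, X, sigma) @ alpha + b
```

Note the trade-off the abstract alludes to: unlike a standard SVM, every training point receives a nonzero dual coefficient, so sparseness is lost in exchange for the simpler linear-algebra training step.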

1. New techniques for meshless flow simulation generalizing moving least squares

2015-11-01

While the Lagrangian nature of SPH offers unique flexibility in application problems, practitioners are forced to choose between compatibility in div/grad operators or low accuracy limiting the scope of the method. In this work, two new discretization frameworks are introduced that extend concepts from finite difference methods to a meshless context: one generalizing the high-order convergence of compact finite differences and another generalizing the enhanced stability of staggered marker-and-cell schemes. The discretizations are based on a novel polynomial reconstruction process that allows arbitrary order polynomial accuracy for both the differential operators and general boundary conditions while maintaining stability and computational efficiency. We demonstrate how the method fits neatly into the ISPH framework and offers a new degree of fidelity and accuracy in Lagrangian particle methods. Supported by the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4), DOE Award DE-SC0009247.

2. [Modelling a penicillin fed-batch fermentation using least squares support vector machines].

PubMed

Liu, Yi; Wang, Hai-Qing

2006-01-01

Biochemical processes are usually characterized as seriously time-varying and nonlinear dynamic systems. Building their first-principles models is very costly and difficult due to the absence of knowledge of the inherent mechanisms and of efficient on-line sensors. Furthermore, these detailed and complicated models do not necessarily guarantee good performance in practice. An approach via least squares support vector machines (LS-SVM) based on the Pensim simulator is proposed for modelling the penicillin fed-batch fermentation process, and the adjustment strategy for the parameters of LS-SVM is presented. Based on the proposed modelling method, the predictive models of penicillin concentration, biomass concentration and substrate concentration are obtained by using very limited on-line measurements. The results show that the models established are more accurate and efficient, and suffice for the requirements of control and optimization for biochemical processes. PMID:16572855

3. Multidimensional least-squares resolution of Raman spectra from intermediates in sensitized photochemical reactions

SciTech Connect

Fister, J.C. III; Harris, J.M.

1995-12-01

Transient resonance Raman spectroscopy is used to elicit reaction kinetics and intermediate spectra from sensitized photochemical reactions. Nonlinear least-squares analysis of Raman spectra of a triplet-state photosensitizer (benzophenone), acquired as a function of laser intensity and/or quencher concentration, allows the Raman spectra of the sensitizer excited state and intermediate photoproducts to be resolved from the spectra of the ground state and solvent. In cases where physical models describing the system kinetics cannot be found, factor analysis techniques are used to obtain the intermediate spectra. Raman spectra of triplet-state benzophenone and acetophenone, obtained as a function of laser excitation kinetics, and the Raman spectra of intermediates formed by energy transfer (triplet-state biacetyl) and hydrogen abstraction (benzhydrol radical) are discussed.

4. Nonlinear fitness landscape of a molecular pathway.

PubMed

Perfeito, Lilia; Ghozzi, Stéphane; Berg, Johannes; Schnetz, Karin; Lässig, Michael

2011-07-01

Genes are regulated because their expression involves a fitness cost to the organism. The production of proteins by transcription and translation is a well-known cost factor, but the enzymatic activity of the proteins produced can also reduce fitness, depending on the internal state and the environment of the cell. Here, we map the fitness costs of a key metabolic network, the lactose utilization pathway in Escherichia coli. We measure the growth of several regulatory lac operon mutants in different environments inducing expression of the lac genes. We find a strikingly nonlinear fitness landscape, which depends on the production rate and on the activity rate of the lac proteins. A simple fitness model of the lac pathway, based on elementary biophysical processes, predicts the growth rate of all observed strains. The nonlinearity of fitness is explained by a feedback loop: production and activity of the lac proteins reduce growth, but growth also affects the density of these molecules. This nonlinearity has important consequences for molecular function and evolution. It generates a cliff in the fitness landscape, beyond which populations cannot maintain growth. In viable populations, there is an expression barrier of the lac genes, which cannot be exceeded in any stationary growth process. Furthermore, the nonlinearity determines how the fitness of operon mutants depends on the inducer environment. We argue that fitness nonlinearities, expression barriers, and gene-environment interactions are generic features of fitness landscapes for metabolic pathways, and we discuss their implications for the evolution of regulation. PMID:21814515

5. The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed

NASA Technical Reports Server (NTRS)

Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.

2006-01-01

The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting star light interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.

6. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

Loredo, Thomas J.

2016-01-01

This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

7. The program LOPT for least-squares optimization of energy levels

Kramida, A. E.

2011-02-01

The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem.
Program summary
Program title: LOPT
Catalogue identifier: AEHM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 19 254
No. of bytes in distributed program, including test data, etc.: 427 839
Distribution format: tar.gz
Programming language: Perl v.5
Computer: PC, Mac, Unix workstations
Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX)
RAM: 3 Mwords or more
Word size: 32 or 64
Classification: 2.2
Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals.
Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution.
Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer.
Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision.
Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
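
The linear-algebra structure LOPT exploits can be shown on a toy example: each measured transition contributes one row of a sparse design matrix with +1 on the upper level and -1 on the lower level, and least squares reconciles redundant, slightly inconsistent measurements. The three-level system and wavenumbers below are made-up illustrative data, not from the program's test set.

```python
import numpy as np

# Hypothetical 3-level system; level 0 is the ground level, fixed at zero.
transitions = [(1, 0), (2, 0), (2, 1)]         # (upper, lower) level indices
observed = np.array([100.02, 250.01, 149.95])  # wavenumbers, deliberately inconsistent

A = np.zeros((len(transitions), 2))            # columns: free levels 1 and 2
for row, (up, lo) in enumerate(transitions):
    if up > 0:
        A[row, up - 1] += 1.0                  # transition adds the upper level...
    if lo > 0:
        A[row, lo - 1] -= 1.0                  # ...and subtracts the lower level

# Least-squares level values reconcile the redundant measurements,
# and the Ritz wavenumbers are recomputed from the optimized levels.
levels, *_ = np.linalg.lstsq(A, observed, rcond=None)
ritz = A @ levels
```

The residuals `observed - ritz` spread the measurement inconsistency over the level network, which is exactly what makes Ritz wavenumbers more precise than any single measured line.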

8. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

De Beuckeleer, Liene I.; Herrebout, Wouter A.

2016-02-01

To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
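The degree-selection step can be sketched generically: fit polynomials of increasing degree and keep the one minimizing AIC, which rewards fit quality but penalizes parameter count. This is a minimal sketch of AIC-based model selection under Gaussian errors, not the authors' full procedure (which also uses BIC and non-negativity constraints).

```python
import numpy as np

def pick_degree_aic(x, y, max_degree=6):
    """Fit polynomials of increasing degree; return the degree whose
    AIC (n*log(RSS/n) + 2*k, k = number of coefficients) is smallest,
    guarding against the overfitting that higher degrees invite."""
    n = len(x)
    best_deg, best_aic = None, np.inf
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        aic = n * np.log(rss / n + 1e-12) + 2 * (d + 1)
        if aic < best_aic:
            best_deg, best_aic = d, aic
    return best_deg
```

Replacing the `2 * (d + 1)` penalty with `np.log(n) * (d + 1)` gives the BIC variant, which penalizes extra terms more strongly for large datasets.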

9. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis.

PubMed

Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C

2014-12-01

Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar performance to the reference method. PMID:25282423

10. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

PubMed

Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

2013-05-21

We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes. PMID:23697409

11. Discrete variable representation in electronic structure theory: Quadrature grids for least-squares tensor hypercontraction

Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

2013-05-01

We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

12. Comparison of structural and least-squares lines for estimating geologic relations

USGS Publications Warehouse

Williams, G.P.; Troutman, B.M.

1990-01-01

Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
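
The contrast between the two fits can be illustrated with Deming regression, a common structural-analysis line that assumes a known ratio `lam` of the Y-error variance to the X-error variance (`lam = 1` gives the orthogonal fit). This is a generic sketch of the SA-versus-OLS comparison, not the paper's exact Monte Carlo setup.

```python
import numpy as np

def ols_line(x, y):
    """Ordinary least squares: minimizes vertical residuals only."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return slope, y.mean() - slope * x.mean()

def deming_line(x, y, lam=1.0):
    """Structural (Deming) fit: allows errors in both X and Y, with
    known error-variance ratio lam = var(err_Y) / var(err_X)."""
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    return slope, y.mean() - slope * x.mean()
```

With error-free data both methods recover the true line; once X carries error, the OLS slope is attenuated toward zero while the Deming slope is never smaller, which is the bias the structural method corrects.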

13. UNIPALS: SOFTWARE FOR PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION

EPA Science Inventory

Software for the analysis of multivariate chemical data by principal components and partial least squares methods is included on disk. The methods extract latent variables from the chemical data with the UNIversal PArtial Least Squares or UNIPALS algorithm. The software is written ...

14. First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients

NASA Technical Reports Server (NTRS)

Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard

1996-01-01

The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold uniformly in the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.

15. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

NASA Technical Reports Server (NTRS)

Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

2005-01-01

A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

16. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

NASA Technical Reports Server (NTRS)

Wilson, Edward (Inventor)

2006-01-01

The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equations as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
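
The building block run concurrently for each parameter group is the standard recursive least-squares update. The sketch below shows that single-group update only (the patented multi-group coordination is not reproduced); `forgetting` and `p0` are conventional tuning knobs, not values from the patent.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS for y = phi . theta + noise: each new sample updates
    the estimate in O(n^2) without re-solving the batch problem."""
    def __init__(self, n_params, forgetting=1.0, p0=1e6):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * p0        # large P0 = weak prior on theta
        self.lam = forgetting                 # 1.0 = no discounting of old data

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)             # Kalman-style gain
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```

A forgetting factor below 1 discounts old samples, which is what lets such estimators track slowly varying properties like the mass of a spacecraft expending propellant.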

17. A new formulation for total least square error method in d-dimensional space with mapping to a parametric line

Skala, Vaclav

2016-06-01

There are many practical applications based on the Least Square Error (LSE) or Total Least Square Error (TLSE) methods. Usually the standard least square error is used due to its simplicity, but it is not an optimal solution, as it does not optimize the distance itself, but the square of a distance. The TLSE method, respecting the orthogonality of the distance measurement, is computed in d-dimensional space, i.e. for points given in E2 a line π in E2, resp. for points given in E3 a plane ρ in E3, fitting the TLSE criterion is found. However, some tasks in the physical sciences lead to a slightly different problem. In this paper, a new TLSE method is introduced for solving the problem in which data are given in E3 and a line π ∈ E3 is to be found fitting the TLSE criterion. The presented approach is applicable to the general d-dimensional case, i.e. when points are given in Ed and a line π ∈ Ed is to be found. This formulation is different from the standard TLSE formulation.
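
One well-known route to a total least-squares line in E^d (minimizing the sum of squared orthogonal distances) goes through the SVD: the line passes through the centroid with direction given by the first right singular vector of the centered data. This is the standard SVD construction, not the paper's new formulation, shown here for comparison.

```python
import numpy as np

def tls_line(points):
    """Total least-squares line in E^d as a parametric line c + t*v:
    c is the centroid, v the first principal direction, which together
    minimize the sum of squared orthogonal distances to the points."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
    return c, Vt[0]
```

The same decomposition gives the TLSE hyperplane by taking the last singular vector as the normal instead, which is why the line and hyperplane problems the abstract distinguishes share one computational core.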

18. Weighted least-squares deconvolution method for discovery of group differences between complex biofluid 1H NMR spectra

Gipson, Geoffrey T.; Tatsuoka, Kay S.; Sweatman, Brian C.; Connor, Susan C.

2006-12-01

Biomarker discovery through analysis of high-throughput NMR data is a challenging, time-consuming process due to the requirement of sophisticated, dataset specific preprocessing techniques and the inherent complexity of the data. Here, we demonstrate the use of weighted, constrained least-squares for fitting a linear mixture of reference standard data to complex urine NMR spectra as an automated way of utilizing current assignment knowledge and the ability to deconvolve confounded spectral regions. Following the least-squares fit, univariate statistics were used to identify metabolites associated with group differences. This method was evaluated through applications on simulated datasets and a murine diabetes dataset. Furthermore, we examined the differential ability of various weighting metrics to correctly identify discriminative markers. Our findings suggest that the weighted least-squares approach is effective for identifying biochemical discriminators of varying physiological states. Additionally, the superiority of specific weighting metrics is demonstrated in particular datasets. An additional strength of this methodology is the ability for individual investigators to couple this analysis with laboratory specific preprocessing techniques.
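The core weighting device can be sketched simply: a weighted least-squares fit of reference profiles to a measured spectrum reduces to ordinary least squares after scaling each row by the square root of its weight. The Gaussian "reference spectra" below are fabricated stand-ins, and the sketch omits the non-negativity constraint the authors also impose.

```python
import numpy as np

def weighted_lsq(S, y, w):
    """Solve min_c sum_i w_i * ((S @ c - y)_i)^2 by rescaling each row
    of S and y by sqrt(w_i), then calling an ordinary LS solver."""
    sw = np.sqrt(np.asarray(w, dtype=float))
    c, *_ = np.linalg.lstsq(S * sw[:, None], y * sw, rcond=None)
    return c
```

Choosing `w` is where the paper's weighting metrics enter: upweighting well-assigned spectral regions and downweighting confounded ones changes which mixture coefficients the fit trusts.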

19. Time-dependent speciation and extinction from phylogenies: a least squares approach.

PubMed

2011-03-01

Molecular phylogenies contribute to the study of the patterns and processes of macroevolution even though past events (fossils) are not recorded in these data. In this article, I consider the general time-dependent birth-death model to fit any model of temporal variation in speciation and extinction to phylogenies. I establish formulae to compute the expected cumulative distribution function of branching times for any model, and, building on previous published works, I derive maximum likelihood estimators. Some limitations of the likelihood approach are described, and a fitting procedure based on least squares is developed that alleviates the shortcomings of maximum likelihood in the present context. Parametric and nonparametric bootstrap procedures are developed to assess uncertainty in the parameter estimates, the latter version giving narrower confidence intervals and being faster to compute. I also present several general algorithms of tree simulation in continuous time. I illustrate the application of this approach with the analysis of simulated datasets, and two published phylogenies of primates (Catarrhinae) and lizards (Agamidae). PMID:21054360

20. Multilevel solvers of first-order system least-squares for Stokes equations

SciTech Connect

Lai, Chen-Yao G.

1996-12-31

Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

1. Least-squares methods involving the H{sup -1} inner product

SciTech Connect

Pasciak, J.

1996-12-31

Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

2. Domain Decomposition Algorithms for First-Order System Least Squares Methods

NASA Technical Reports Server (NTRS)

Pavarino, Luca F.

1996-01-01

Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

3. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.

PubMed

Liu, Dawei; Lin, Xihong; Ghosh, Debashis

2007-12-01

We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations. PMID:18078480

4. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

2010-04-01

Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piecewise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from the filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.

5. Fishery landing forecasting using EMD-based least square support vector machine models

Shabri, Ani

2015-05-01

In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and least squares support vector machines (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, which form highly nonlinear, non-stationary and seasonal time series that can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

6. Least squares solutions of the HJB equation with neural network value-function approximators.

PubMed

Tassa, Yuval; Erez, Tom

2007-07-01

In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation. PMID:17668659

7. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique.

PubMed

Hao, Ming; Wang, Yanli; Bryant, Stephen H

2016-02-25

Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be improved further by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. PMID:26851083

8. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

PubMed

Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

2015-09-01

This work was undertaken to establish a quantitative analysis model that can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured using near-infrared (NIR) absorption spectroscopy. After analyzing the NIR absorption peaks of all samples, the spectral interval of 7100~4500 cm(-1) was found to carry abundant chemical information with relatively low interference from random noise; thus, the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration set samples and 52 validation set samples. Gas chromatography-mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in the lavender essential oil, and a matrix was established from the GC-MS reference data of the two compounds in combination with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and their filtering effects were compared; after analyzing the quantitative model results for linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was the optimum pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points that are unrelated to the determined composition or show nonlinear correlation; finally, 8 spectral intervals totalling 160 wavelength points were obtained as the dataset. The data sets optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil, numbers of hidden variables of two

9. Fast algorithm for the solution of large-scale non-negativity constrained least squares problems.

SciTech Connect

Van Benthem, Mark Hilary; Keenan, Michael Robert

2004-06-01

Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active-set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to substantially reduce the computational burden required for NNLS problems having large numbers of observation vectors.
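As a rough illustration of the NNLS problem this record refers to, the sketch below solves min ||Ax - b|| subject to x >= 0 by projected gradient descent. This is a deliberately simple stand-in, not the active-set algorithm the abstract describes (active-set methods such as Lawson-Hanson are what production MCR codes use); the matrix and data values are invented for the example.

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    # Projected gradient on f(x) = 0.5 * ||A x - b||^2 with the
    # non-negativity constraint enforced by clamping after each step.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant of grad f
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
    return x

A = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.5, 0.5]])
b = np.array([0.7, -0.4, 0.3])   # chosen so the unconstrained fit goes negative
x = nnls_pg(A, b)                # the constraint clamps the second coefficient to 0
```

The unconstrained least-squares solution here has a negative second component; under the non-negativity constraint the KKT conditions pin that component to zero, which is exactly what the iteration converges to.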

10. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

Borodachev, S. M.

2016-06-01

The simple derivation of recursive least squares (RLS) method equations is given as special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates application of RLS to multicollinearity problem.
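The derivation sketched in this record can be made concrete in a few lines: treat the regression coefficients as a constant Kalman state and apply only the measurement update. The sketch below assumes Gaussian noise, a diffuse prior, and invented coefficient values; each update is the standard RLS recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([2.0, -1.0])   # true (constant) regression coefficients
est = np.zeros(2)              # state estimate
P = 1e3 * np.eye(2)            # state covariance (diffuse prior)
r = 0.1**2                     # measurement noise variance
for _ in range(500):
    h = rng.normal(size=2)                # observation row (changing conditions)
    y = h @ beta + 0.1 * rng.normal()     # noisy scalar observation
    k = P @ h / (h @ P @ h + r)           # Kalman gain
    est = est + k * (y - h @ est)         # measurement update = RLS update
    P = P - np.outer(k, h @ P)            # covariance update
```

Because the state transition is the identity with no process noise, the Kalman filter's prediction step is a no-op and the recursion reduces to textbook RLS.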

11. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

SciTech Connect

Manteuffel, T.A; Ressel, K.J.; Starkes, G.

1996-12-31

The least-squares finite element framework for the neutron transport equation, introduced in earlier work, is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional, augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

12. Iterative least-squares solvers for the Navier-Stokes equations

SciTech Connect

Bochev, P.

1996-12-31

In recent years finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

13. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

PubMed

Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

2016-04-10

Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method renders the zones with small aberrations insensitive and hinders those zones from being corrected further. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparisons of results show that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method performs better than the conventional least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system. PMID:27139877
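The zonal weighting idea can be sketched generically as row-scaled least squares. This is not the paper's wavefront-reconstruction matrix, just a toy line fit in which one sample plays the role of a zone with a large uncorrectable residual and is given a small weight so it cannot dominate the fit:

```python
import numpy as np

def weighted_lstsq(A, b, w):
    # Minimize sum_i w_i * (A x - b)_i^2 by scaling each row with sqrt(w_i),
    # then solving the ordinary least-squares problem on the scaled system.
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return x

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0                             # true line: slope 2, intercept 1
b[2] += 5.0                                   # a "zone" with a large residual
w = np.array([1.0, 1.0, 1e-3, 1.0, 1.0])      # small weight on that zone
slope, intercept = weighted_lstsq(A, b, w)
```

With uniform weights the corrupted sample pulls the fit away from the true line; down-weighting it recovers slope and intercept almost exactly.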

14. Semiclassical calculations of tunneling using interpolating moving least-squares potentials

Pham, Phong

The interpolating moving least-squares (IMLS) and local IMLS (L-IMLS) methods are incorporated into semiclassical trajectory simulation, and issues related to the implementation are investigated. Potential energy surfaces (PESs) constructed by the IMLS/L-IMLS methods are used to study tunneling in the polyatomic systems HONO and malonaldehyde, where direct dynamics becomes prohibitively expensive at high ab initio levels. To study cis-trans isomerization in HONO, the PES is constructed by L-IMLS fitting at the MP4(SDQ)/6-31++G(d,p) level with the HDMR(5,3,3) basis set. The results compare well with others in the literature, and the semiclassical rates are close to the reference quantum mechanical ones. The isomerization is governed by energy transfer into the reaction coordinate, the torsional mode; the rate is strongly mode-selective, and much faster in the cis-to-trans direction than in the opposite one. To study the ground-state splitting of malonaldehyde, the PES is first constructed by single-level L-IMLS fitting at the MP2/6-31G(d,p) level with the HDMR(3,2) basis set. A dual-level method is then employed to increase the accuracy of the PES and reduce computational cost, using MP4/6-31G(d,p) as the high-level method. For a 0.5 kcal/mol fitting tolerance, the splitting is 38.7 and 8.8 cm-1 at the MP2 single level, and 29.6 and 5.5 cm-1 at the MP4 dual level for the H9 and D5D9 isotopomers respectively, compared to the experimental values of 21.6 and 2.884 cm-1. The splitting is within a factor of two of experiment and agrees with other quantum mechanical and semiclassical studies.

15. The comparison of robust partial least squares regression with robust principal component regression on a real

Polat, Esra; Gunay, Suleyman

2013-10-01

One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, and hence to compare the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

16. Simultaneous backscatter and attenuation estimation using a least squares method with constraints.

PubMed

Nam, Kibo; Zagzebski, James A; Hall, Timothy J

2011-12-01

Backscatter and attenuation variations are essential contrast mechanisms in ultrasound B-mode imaging. Emerging quantitative ultrasound methods extract and display absolute values of these tissue properties. However, in clinical applications, backscatter and attenuation parameters sometimes are not easily measured because of tissue inhomogeneities above the region of interest (ROI). We describe a least squares method (LSM) that fits the echo signal power spectra from a ROI to a three-parameter tissue model that simultaneously yields estimates of attenuation losses and backscatter coefficients. To test the method, tissue-mimicking phantoms with backscatter and attenuation contrast as well as uniform phantoms were scanned with linear array transducers on a Siemens S2000. Attenuation and backscatter coefficients estimated by the LSM were compared with those derived using a reference phantom method (Yao et al. 1990). Results show that the LSM yields effective attenuation coefficients for uniform phantoms comparable to values derived using the reference phantom method. For layered phantoms exhibiting nonuniform backscatter, the LSM resulted in smaller attenuation estimation errors than the reference phantom method. Backscatter coefficients derived using the LSM were in excellent agreement with values obtained from laboratory measurements on test samples and with theory. The LSM is more immune to depth-dependent backscatter changes than commonly used reference phantom methods. PMID:21963038

17. Nucleus detection using gradient orientation information and linear least squares regression

Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

2015-03-01

Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.

18. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

USGS Publications Warehouse

Donato, David I.

2013-01-01

A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p2+16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
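The single-pass accumulation described above can be sketched as follows. This is plain OLS on synthetic data with invented coefficient values; the actual NDMMF computation additionally applies observation weights, which would simply scale each outer-product contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
beta = np.array([1.0, -2.0, 0.5])   # hypothetical true parameters
XtX = np.zeros((p, p))              # accumulates X'X
Xty = np.zeros(p)                   # accumulates X'y
for _ in range(5000):               # single pass; the design matrix X is never stored
    row = rng.normal(size=p)
    y = row @ beta + 0.01 * rng.normal()
    XtX += np.outer(row, row)
    Xty += row * y
coef = np.linalg.solve(XtX, Xty)    # solve the normal equations for the OLS fit
```

Only the p-by-p matrix X'X and the length-p vector X'y live in memory, matching the O(8p²+16p)-byte footprint the record describes for 8-byte doubles.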

19. Methods for Least Squares Data Smoothing by Adjustment of Divided Differences

Demetriou, I. C.

2008-09-01

A brief survey is presented of the main methods used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of this smoothing approach is that it automatically provides a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although this is a combinatorial problem, whose solution can require about O(nk) quadratic programming calculations in n variables with n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m > 2 for small values of k. Attention is paid to the purpose of each method rather than to its details. Some software packages make the methods publicly accessible through library systems.

20. SIMULTANEOUS BACKSCATTER AND ATTENUATION ESTIMATION USING A LEAST SQUARES METHOD WITH CONSTRAINTS

PubMed Central

Nam, Kibo; Zagzebski, James A.; Hall, Timothy J.

2011-01-01

Backscatter and attenuation variations are essential contrast mechanisms in ultrasound B-mode imaging. Emerging quantitative ultrasound methods extract and display absolute values of these tissue properties. However, in clinical applications, backscatter and attenuation parameters sometimes are not easily measured because of tissue inhomogeneities above the region of interest. We describe a least squares method (LSM) that fits the echo signal power spectra from a region of interest (ROI) to a three-parameter tissue model that simultaneously yields estimates of attenuation losses and backscatter coefficients. To test the method, tissue-mimicking phantoms with backscatter and attenuation contrast as well as uniform phantoms were scanned with linear array transducers on a Siemens S2000. Attenuation and backscatter coefficients estimated by the LSM were compared with those derived using a reference phantom method (Yao et al. 1990). Results show that the LSM yields effective attenuation coefficients for uniform phantoms comparable to values derived using the reference phantom method. For layered phantoms exhibiting non-uniform backscatter, the LSM resulted in smaller attenuation estimation errors than the reference phantom method. Backscatter coefficients derived using the LSM were in excellent agreement with values obtained from laboratory measurements on test samples and with theory. The LSM is more immune to depth-dependent backscatter changes than commonly used reference phantom methods. PMID:21963038

1. Analysis of crustal deformation and strain characteristics in the Tianshan Mountains with least-squares collocation

Li, S. P.; Chen, G.; Li, J. W.

2015-11-01

By fitting the observed velocity field of the Tianshan Mountains from 1992 to 2006 with least-squares collocation, we established a velocity field model for this region. The model reflects the crustal deformation characteristics of the Tianshan reasonably well. From the Tarim Basin to the Junggar Basin and the Kazakh platform, crustal deformation decreases gradually. Divided at 82° E, the convergence rates in the west are clearly higher than those in the east. We also calculated the crustal strain parameters for the Tianshan Mountains. The results for maximum shear strain exhibit a concentration of significantly high values at Wuqia and the regions to its west, reaching a maximum of 4.4×10-8 a-1. From the isogram distributions of the surface expansion rate, we found evidence that the Tianshan Mountains have been subject to strong lateral extrusion by the basins on both sides. Combining this analysis with existing focal mechanism solutions from 1976 to 2014, we conclude that earthquake events are likely to concentrate in regions where maximum shear strain accumulates or changes abruptly. For the Tianshan Mountains, the possibility of strong earthquakes in Wuqia-Jiashi and around Lake Issyk-Kul will persist over the long term.

2. Least Squares Evaluations for Form and Profile Errors of Ellipse Using Coordinate Data

Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

2016-04-01

To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks, which allows a tolerance range to be evaluated accurately with the separated form error and profile error of the workpiece. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited to separating the two errors in a standard program. Finally, the evaluation method is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
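A minimal sketch of least-squares ellipse fitting from coordinate data, using the common algebraic conic parameterization rather than the paper's separated form/profile indicators; the ellipse centre and semi-axes below are invented for the example.

```python
import numpy as np

# Sample points on an ellipse centred at (1, 2) with semi-axes 3 and 1.
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
x = 1.0 + 3.0 * np.cos(t)
y = 2.0 + 1.0 * np.sin(t)

# Algebraic least-squares conic fit: a x^2 + b xy + c y^2 + d x + e y = 1.
D = np.column_stack([x * x, x * y, y * y, x, y])
coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
a, b_, c, d, e = coef

# The fitted centre solves grad F = 0 for F = a x^2 + b xy + c y^2 + d x + e y - 1.
centre = np.linalg.solve([[2 * a, b_], [b_, 2 * c]], [-d, -e])
```

For noise-free points the fit is exact and the recovered centre matches (1, 2); with measured coordinates the same solve gives the least-squares conic, from which geometric indicators can then be derived.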

3. Towards a Generic Method for Building-Parcel Vector Data Adjustment by Least Squares

Méneroux, Y.; Brasebin, M.

2015-08-01

Being able to merge high-quality and complete building models with parcel data is of paramount importance for any application dealing with urban planning. However, since parcel boundaries often stand for the legal reference frame, the whole correction must be done exclusively on building features. A major task is then to identify the spatial relationships and properties that buildings should keep through the conflation process. The purpose of this paper is to describe a method based on a least squares approach to ensure that buildings fit consistently into parcels while abiding by a set of standard constraints that may concern most urban applications. An important asset of our model is that it can be easily extended to comply with more specific constraints. In addition, the results analysis also demonstrates that it provides significantly better output than a basic algorithm relying on an individual correction of features, especially regarding the conservation of metrics and topological relationships between buildings. In the future, we would like to include more specific constraints to retrieve the actual positions of buildings relative to parcel borders, and we plan to assess the contribution of our algorithm to the quality of urban application outputs.

4. Kinetic microplate bioassays for relative potency of antibiotics improved by partial Least Square (PLS) regression.

PubMed

Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello

2016-05-01

Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose using partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbances or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression; however, there were significant deviations from linearity. Thus, they could not be used for relative potency estimation. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from the official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination. PMID:26971814
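PLS regression itself can be sketched with a minimal single-response (PLS1) NIPALS implementation. This is generic PLS, not the authors' software, and the data below are synthetic; on noiseless full-rank data with all components retained, PLS reduces to ordinary least squares, which the test exploits.

```python
import numpy as np

def pls1(X, y, n_comp):
    # Minimal NIPALS sketch of PLS1: extract n_comp latent components,
    # deflating X after each, then assemble the regression coefficients.
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk = X - x_mean
    yk = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                    # weight vector (covariance direction)
        w /= np.linalg.norm(w)
        t = Xk @ w                       # scores
        p = Xk.T @ t / (t @ t)           # X loadings
        q.append(yk @ t / (t @ t))       # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        W.append(w); P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)   # coefficients on centered data
    return coef, x_mean, y_mean

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
beta = np.array([1.5, -0.5, 2.0])     # invented coefficients
y = X @ beta                          # noiseless toy response
coef, x_mean, y_mean = pls1(X, y, n_comp=3)
y_hat = (X - x_mean) @ coef + y_mean
```

In practice one retains fewer components than predictors, which is where PLS gains its robustness to collinearity and over-fitting on spectra-like data such as kinetic absorbance readings.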

5. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

SciTech Connect

Jiang, Lijian Li, Xinping

2015-08-01

Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least squares high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least squares HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces, with coefficients determined by least squares. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-squares HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity and multiscale features in the models effectively, we integrate multiscale finite element methods with the multi-element least-squares HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least squares HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • A least-squares reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions.
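
A first-order least squares HDMR can be sketched for a toy model with two random inputs: each component function is expanded in orthogonal (Legendre) basis functions and the coefficients are fit by least squares (an illustrative sketch only, far from the paper's multiscale setting):

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    # toy stochastic model output: additive in the two random inputs
    return 1.0 + 2.0 * x[:, 0] + x[:, 1] ** 2

# first-order HDMR: f ≈ f0 + f1(x1) + f2(x2), with each component
# expanded in Legendre polynomials on [-1, 1]
X = rng.uniform(-1, 1, size=(400, 2))
y = f(X)

def legendre_basis(t):
    # P1 and P2 (P0 is absorbed into the constant column)
    return np.column_stack([t, 0.5 * (3 * t ** 2 - 1)])

Phi = np.column_stack([np.ones(len(X)),
                       legendre_basis(X[:, 0]),
                       legendre_basis(X[:, 1])])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least squares coefficients
y_hat = Phi @ coef
```

Since the toy model is exactly additive and lies in the span of the basis, the first-order HDMR reproduces it; interaction terms would require second-order components.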

6. An analysis of the least-squares problem for the DSN systematic pointing error model

NASA Technical Reports Server (NTRS)

Alvarez, L. S.

1991-01-01

A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
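
The sky-coverage effect is easy to reproduce with a toy design matrix: if all measurements sit at a single elevation, the model columns become linearly dependent and the least squares problem loses rank (a schematic sketch; the actual DSN pointing-model terms are not shown):

```python
import numpy as np

# poor sky coverage: every scan taken at 45 degrees elevation
el = np.radians(np.full(6, 45.0))

# toy 3-parameter pointing model: constant offset + cos(el) + sin(el) terms
A = np.column_stack([np.ones(6), np.cos(el), np.sin(el)])
y = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.12])   # measured errors (mdeg)

x, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
# with elevation fixed, all three columns are proportional to each other,
# so rank == 1: only one combination of the parameters is estimable
```

Subset selection amounts to keeping only as many parameters as the numerical rank supports, which `lstsq` exposes via its `rank` and singular-value outputs.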

7. On the accuracy of least squares methods in the presence of corner singularities

NASA Technical Reports Server (NTRS)

Cox, C. L.; Fix, G. J.

1985-01-01

Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts. Moreover, mesh refinement or the use of special singular elements do not appreciably improve matters. It is shown that if the least squares formulation is done in appropriately weighted space, then optimal convergence results in unweighted spaces like L(2).

8. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

PubMed

Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

2016-01-01

Two accurate, sensitive, and selective stability-indicating methods were developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that were subjected to a comparative study through the handling of UV spectral data in the range of 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components. PMID:26987554
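
The classical least-squares step that SRACLS builds on treats each mixture spectrum as a linear combination of pure-component spectra (a generic CLS sketch with invented Gaussian "spectra", not the actual AGM data):

```python
import numpy as np

wavelengths = np.linspace(215, 350, 100)

# invented pure-component "spectra" for three analytes (stand-ins for
# AGM, Deg I and Deg II), one Gaussian band each
K = np.vstack([np.exp(-((wavelengths - mu) / 20.0) ** 2)
               for mu in (240.0, 275.0, 310.0)])        # 3 x 100

c_true = np.array([0.6, 0.3, 0.1])                      # mixture concentrations
a_mix = c_true @ K                                      # measured mixture spectrum

# CLS: solve a_mix ≈ c @ K for the concentrations in the LS sense
c_hat, *_ = np.linalg.lstsq(K.T, a_mix, rcond=None)
```

With noise-free data and distinct bands the recovery is exact; SRACLS augments this scheme with spectral residuals to handle unmodeled components.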

9. On realizations of least-squares estimation and Kalman filtering by systolic arrays

NASA Technical Reports Server (NTRS)

Chen, M. J.; Yao, K.

1986-01-01

Least-squares (LS) estimation is a basic operation in many signal processing problems. Given y = Ax + v, where A is an m x n coefficient matrix, y is an m x 1 observation vector, and v is an m x 1 zero-mean white noise vector, the simple least-squares problem is to find the estimate x̂ that minimizes the norm ||Ax - y||. It is well known that for an ill-conditioned matrix A, solving least-squares problems by orthogonal triangular (QR) decomposition and back substitution has robust numerical properties under finite word length effects, since the 2-norm is preserved. Many fast algorithms have been proposed and applied to systolic arrays. Gentleman and Kung (1981) first presented the triangular systolic array for a basic Givens reduction. McWhirter (1983) used this array structure to find the least-squares estimation errors. Then, by a geometric approach, several different systolic array realizations of the recursive least-squares estimation algorithms of Lee et al. (1981) were derived by Kalson and Yao (1985). Basic QR decomposition algorithms are considered in this paper, and it is found that under a one-row time-updating situation, the Householder transformation degenerates to a simple Givens reduction. Next, an improved least-squares estimation algorithm is derived by considering a modified version of fast Givens reduction. From this approach, the basic relationship between Givens reduction and the Modified Gram-Schmidt transformation can easily be understood. The improved algorithm also has lower computational and inter-cell connection complexities than other known least-squares algorithms and is more realistic for systolic array implementation.
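
The Givens reduction at the heart of these systolic arrays can be sketched in a few lines of NumPy (a plain sequential sketch of QR-by-Givens with back substitution, not the fast or systolic variants discussed above):

```python
import numpy as np

def givens_qr_lstsq(A, b):
    """Solve min ||Ax - b|| by Givens rotations (QR) and back substitution."""
    R = A.astype(float).copy()
    c_vec = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):          # zero out subdiagonal entries
            if R[i, j] == 0.0:
                continue
            r = np.hypot(R[i - 1, j], R[i, j])
            c, s = R[i - 1, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])    # 2x2 plane rotation
            R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
            c_vec[[i - 1, i]] = G @ c_vec[[i - 1, i]]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):             # back substitution on the R factor
        x[k] = (c_vec[k] - R[k, k + 1:n] @ x[k + 1:]) / R[k, k]
    return x

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])
x_hat = givens_qr_lstsq(A, b)
```

Each rotation touches only two rows, which is exactly the locality property that systolic implementations exploit.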

10. Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares tensor hypercontraction

SciTech Connect

Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.

2014-05-14

We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N{sup 6}) to O(N{sup 5}), with remarkably low error. Combined with a T{sub 1}-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.

11. Least squares regression methods for clustered ROC data with discrete covariates.

PubMed

Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

2016-07-01

The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, least squares methods for estimating the ROC curve from clustered data have not been developed, and the statistical properties of least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuity of the true underlying curve. The least squares methods are shown to be more efficient than existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. PMID:26848938
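
A least squares fit of a smooth ROC curve can be sketched for the classic binormal model, where the ROC is linear in probit coordinates (a toy for independent data; the paper's clustered-data estimators are more involved):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
nd = NormalDist()

# binormal setting: the true ROC is Phi(a + b * Phi^{-1}(t)) with a=1.5, b=1
diseased = rng.normal(1.5, 1.0, size=300)
healthy = rng.normal(0.0, 1.0, size=300)

# empirical (FPR, TPR) pairs at a grid of thresholds
thresholds = np.linspace(-2, 3, 21)
fpr = np.array([(healthy > c).mean() for c in thresholds])
tpr = np.array([(diseased > c).mean() for c in thresholds])

# keep interior points, map both rates to the probit scale, fit a line by LS
keep = (fpr > 0) & (fpr < 1) & (tpr > 0) & (tpr < 1)
u = np.array([nd.inv_cdf(v) for v in fpr[keep]])
v = np.array([nd.inv_cdf(v) for v in tpr[keep]])
b, a = np.polyfit(u, v, 1)     # probit(TPR) ≈ a + b * probit(FPR)
```

The fitted (a, b) define a smooth, strictly increasing ROC curve, unlike the empirical step function.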

12. Algorithms and architectures for adaptive least squares signal processing, with applications in magnetoencephalography

SciTech Connect

Lewis, P.S.

1988-10-01

Least squares techniques are widely used in adaptive signal processing. While algorithms based on least squares are robust and offer rapid convergence properties, they also tend to be complex and computationally intensive. To enable the use of least squares techniques in real-time applications, it is necessary to develop adaptive algorithms that are efficient and numerically stable, and can be readily implemented in hardware. The first part of this work presents a uniform development of general recursive least squares (RLS) algorithms, and multichannel least squares lattice (LSL) algorithms. RLS algorithms are developed for both direct estimators, in which a desired signal is present, and for mixed estimators, in which no desired signal is available, but the signal-to-data cross-correlation is known. In the second part of this work, new and more flexible techniques of mapping algorithms to array architectures are presented. These techniques, based on the synthesis and manipulation of locally recursive algorithms (LRAs), have evolved from existing data dependence graph-based approaches, but offer the increased flexibility needed to deal with the structural complexities of the RLS and LSL algorithms. Using these techniques, various array architectures are developed for each of the RLS and LSL algorithms and the associated space/time tradeoffs presented. In the final part of this work, the application of these algorithms is demonstrated by their employment in the enhancement of single-trial auditory evoked responses in magnetoencephalography. 118 refs., 49 figs., 36 tabs.

13. Comparison of Kriging and Moving Least Square Methods to Change the Geometry of Human Body Models.

PubMed

Jolivet, Erwan; Lafon, Yoann; Petit, Philippe; Beillas, Philippe

2015-11-01

Finite Element Human Body Models (HBM) have become powerful tools to study the response to impact. However, they are typically only developed for a limited number of sizes and ages. Various approaches driven by control points have been reported in the literature for the non-linear scaling of these HBM into models with different geometrical characteristics. The purpose of this study is to compare the performance of commonly used control-point-based interpolation methods in different usage scenarios. Performance metrics include the respect of targets, mesh quality and runability. For this study, the Kriging and Moving Least Squares interpolation approaches were compared in three test cases. The first two cases correspond to changes of anthropometric dimensions of (1) a child model (from 6 to 1.5 years old) and (2) the GHBMC M50 model (Global Human Body Models Consortium, from the 50th percentile to the 5th percentile female). For the third case, the GHBMC M50 rib cage was scaled to match a rib cage geometry derived from a CT scan. In the first two test cases, all tested methods provided similar shapes with acceptable results in terms of time needed for the deformation (a few minutes at most), overall respect of the targets, element quality distribution and time step for explicit simulation. The personalization of the rib cage proved to be much more challenging: none of the methods tested provided fully satisfactory results at the level of the rib trajectory and section, and there were corrugated local deformations unless a smoothed regression through relaxation was used. Overall, the results highlight the importance of the target definition over the choice of interpolation method. PMID:26660750
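
The moving least squares idea can be sketched in one dimension: at every evaluation point a low-order polynomial is fit to nearby control data with distance-decaying weights (a toy 1-D sketch; HBM morphing applies the same idea to 3-D control points):

```python
import numpy as np

def mls_1d(x_nodes, y_nodes, x_eval, h=1.0):
    """Moving least squares with a Gaussian weight and a linear basis."""
    out = []
    for xe in x_eval:
        w = np.exp(-((x_nodes - xe) / h) ** 2)          # local weights
        B = np.column_stack([np.ones_like(x_nodes), x_nodes - xe])
        BW = B * w[:, None]
        # weighted normal equations: (B^T W B) a = B^T W y
        a = np.linalg.solve(BW.T @ B, BW.T @ y_nodes)
        out.append(a[0])        # fitted value at xe is the constant term
    return np.array(out)

# linear precision check: control points on a straight line
x_nodes = np.linspace(0.0, 5.0, 11)
y_nodes = 2.0 * x_nodes + 1.0
x_eval = np.array([0.3, 2.7, 4.1])
y_mls = mls_1d(x_nodes, y_nodes, x_eval)
```

Because the basis contains the linear functions, MLS reproduces linear data exactly (so rigid translations of control points deform the mesh rigidly).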

14. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

PubMed

Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

2016-04-01

This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and the modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff: PVAR is superior to MVAR in all cases, exhibits the best ability to distinguish between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
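
For reference, the Allan variance that PVAR is compared against can be computed from phase samples by second differences (a textbook overlapping-AVAR sketch, not the Ω-counter implementation):

```python
import numpy as np

def avar(x, tau0, m):
    """Overlapping Allan variance at tau = m * tau0 from phase samples x."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]     # second differences of phase
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

# a constant frequency offset is a linear phase ramp, whose second
# differences vanish, so its Allan variance is zero
t = np.arange(1000) * 1e-3          # 1 ms sampling
x_ramp = 5.0 * t                    # phase ramp (constant frequency)
```

PVAR replaces the three-tap second difference with the weighting implied by a least-squares (linear regression) frequency estimate over each half of the 2τ window.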

15. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

NASA Technical Reports Server (NTRS)

Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

1993-01-01

A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

16. Fruit fly optimization based least square support vector regression for blind image restoration

Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

2014-11-01

The goal of image restoration is to reconstruct the original scene from a degraded observation; it is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, these are not available in many real image processing settings, so the recovery must be treated as a blind image restoration problem. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because the PSF and noise energy differ from case to case, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least squares support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. Selecting the optimal parameters of the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples map a neighborhood in the degraded image to the central pixel in the original image, and the mapping between the degraded and original images is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, whose fitness function is the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and
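
LSSVR training reduces to a single linear system in the dual variables (a minimal sketch with an RBF kernel on a toy 1-D signal; the image-patch setup and FOA tuning of the paper are not reproduced):

```python
import numpy as np

# toy regression problem: learn y = sin(x) with an RBF-kernel LSSVR
x = np.linspace(-3.0, 3.0, 40)
y = np.sin(x)

gamma, sigma2 = 100.0, 0.5      # regularization and kernel width (the two
                                # hyperparameters the paper tunes with FOA)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma2))

# LSSVR dual problem: one (n+1)x(n+1) linear system for bias b and alphas
n = len(x)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(M, rhs)
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b           # predictions on the training inputs
```

Unlike standard SVR, every training point gets a (non-sparse) multiplier, which is what makes the fit a plain linear solve.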

17. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

PubMed

Boccard, Julien; Rudaz, Serge

2016-05-12

Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing the factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219
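
The ANOVA partitioning that precedes the multiblock OPLS modelling can be sketched for a balanced two-factor design (an illustrative toy; matrix sizes and factor layout are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy Omics block: 12 samples x 5 variables, balanced 2x2 design
X = rng.normal(size=(12, 5))
fA = np.repeat([0, 1], 6)          # factor A: first 6 vs last 6 samples
fB = np.tile([0, 1], 6)            # factor B: alternating levels

Xc = X - X.mean(axis=0)            # remove the grand mean

def effect_matrix(M, factor):
    """Replace each row by its factor-level mean (ANOVA effect submatrix)."""
    E = np.zeros_like(M)
    for lvl in np.unique(factor):
        E[factor == lvl] = M[factor == lvl].mean(axis=0)
    return E

XA = effect_matrix(Xc, fA)                          # main effect of A
XB = effect_matrix(Xc, fB)                          # main effect of B
XAB = effect_matrix(Xc - XA - XB, 2 * fA + fB)      # A x B interaction
resid = Xc - XA - XB - XAB                          # pure residual block
```

The effect submatrices sum back to the centered data, and in a balanced design the main-effect blocks are mutually orthogonal — the property the multiblock OPLS step exploits.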

18. A variant of sparse partial least squares for variable selection and data exploration

PubMed Central

Olson Hunt, Megan J.; Weissfeld, Lisa; Boudreau, Robert M.; Aizenstein, Howard; Newman, Anne B.; Simonsick, Eleanor M.; Van Domelen, Dane R.; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina

2014-01-01

When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed “all-possible” SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a “large” number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors. PMID:24624079

19. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving large-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be a satisfactory one.

20. On the equivalence of Kalman filtering and least-squares estimation

Mysen, E.

2016-07-01

The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.

1. Least-squares reverse-time migration of Cranfield VSP data for monitoring CO2 injection

TAN, S.; Huang, L.

2012-12-01

Cost-effective monitoring for carbon utilization and sequestration requires high-resolution imaging with a minimal amount of data. Least-squares reverse-time migration is a promising imaging method for this purpose. We apply least-squares reverse-time migration to a portion of the 3D vertical seismic profile (VSP) data acquired at the Cranfield enhanced oil recovery field in Mississippi for monitoring CO2 injection. Conventional reverse-time migration of limited data suffers from significant image artifacts and poor image resolution. Least-squares reverse-time migration reduces image artifacts and improves the image resolution. We demonstrate these significant improvements by comparing the least-squares reverse-time migration images of the Cranfield VSP data with those obtained using conventional reverse-time migration.

2. [Partial least squares regression variable screening studies on apple soluble solids NIR spectral detection].

PubMed

Ouyang, Ai-Guo; Xie, Xiao-Qiang; Zhou, Yan-Rui; Liu, Yan-De

2012-10-01

To improve the predictive ability and robustness of the NIR calibration model for the soluble solids content (SSC) of apple, the reverse interval partial least squares method, a genetic algorithm and the continuous projection method were used to select variables from the NIR spectra, and partial least squares regression models were established. The correction model built on the 141 variables screened by the genetic algorithm gave the best prediction: compared with the full-spectrum correction model, the correlation coefficient increased from 0.93 to 0.96, and the root mean square error of prediction decreased from 0.30 to 0.23 degrees Brix. These experimental results show that the genetic algorithm combined with partial least squares regression improved the detection precision of the NIR model for the SSC of apple. PMID:23285864

3. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

Kermarrec, Gaël; Schön, Steffen

2016-05-01

Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account by means of a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the row sums of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator, in terms of both the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the millimetre for all simulated GPS cases and in the sub-millimetre range for the relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
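
The row-sum construction can be checked numerically for the mean estimator: with a symmetric weight matrix W, the diagonal weights dᵢ = Σⱼ Wᵢⱼ reproduce the GLS mean exactly (a sketch with invented data, not the paper's GPS setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# a fully populated covariance (AR(1)-type correlations) and its weight matrix
n = 8
rho = 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
W = np.linalg.inv(Sigma)
y = rng.normal(size=n) + 3.0

ones = np.ones(n)
gls_mean = (ones @ W @ y) / (ones @ W @ ones)   # generalized least squares mean
d = W.sum(axis=1)                               # row sums -> diagonal weights
dwls_mean = (d @ y) / d.sum()                   # diagonally weighted mean
```

The equality holds because W is symmetric, so d = W·1 and dᵀy = 1ᵀWy; for richer design matrices the equivalence is only empirical, as the abstract notes.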

4. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

PubMed Central

Wang, Dongliang; Hutson, Alan D.

2016-01-01

The traditional confidence interval associated with the ordinary least squares estimator of linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator via utilizing well-defined inversion based kernel smoothing techniques in order to estimate the conditional probability density distribution of the dependent random variable. Simulation results show that given a small sample size, our method significantly increases the power as compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882

5. Least square neural network model of the crude oil blending process.

PubMed

Rubio, José de Jesús

2016-06-01

In this paper, the recursive least squares algorithm is designed for big-data learning of a feedforward neural network. The proposed method, a combination of recursive least squares and a feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local-minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. PMID:26992706
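
The recursive least squares update has a compact rank-one form (a generic RLS sketch on a linear-in-parameters model, not the paper's neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

theta_true = np.array([2.0, -1.0, 0.5])   # unknown parameters to recover
theta = np.zeros(3)                       # running estimate
P = np.eye(3) * 1e6                       # (scaled) inverse information matrix

for _ in range(500):
    x = rng.normal(size=3)                # regressor vector for this sample
    y = x @ theta_true                    # noise-free measurement
    Px = P @ x
    k = Px / (1.0 + x @ Px)               # gain vector
    theta = theta + k * (y - x @ theta)   # rank-one parameter update
    P = P - np.outer(k, Px)               # rank-one covariance update
```

Each sample costs O(p²) rather than re-solving the full normal equations, which is what makes RLS attractive for streaming "big data" settings.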

6. MLAMBDA: a modified LAMBDA method for integer least-squares estimation

Chang, X.-W.; Yang, X.; Zhou, T.

2005-12-01

The Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method has been widely used in GNSS for fixing integer ambiguities. It can also solve any integer least squares (ILS) problem arising from other applications. For real-time applications with high dimensions, computational speed is crucial. A modified LAMBDA (MLAMBDA) method is presented, with several strategies proposed to reduce the computational complexity of the LAMBDA method. Numerical simulations show that MLAMBDA is (much) faster than LAMBDA. The relations between the LAMBDA method and some relevant methods in the information theory literature are pointed out when we introduce its main procedures.
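
Unlike ordinary least squares, the integer least squares problem is not solved by rounding the real-valued solution; for a tiny ill-conditioned case the optimum can be found by enumeration (a brute-force sketch — LAMBDA/MLAMBDA instead decorrelate the ambiguities and then search efficiently):

```python
import numpy as np

# ill-conditioned ILS problem: min over integer z of ||A z - y||^2
A = np.array([[1.0, 0.97],
              [0.0, 0.05]])
y = np.array([1.0, 0.0333])

# naive approach: round the real-valued least squares solution
z_float = np.linalg.lstsq(A, y, rcond=None)[0]
z_round = np.round(z_float).astype(int)

# brute-force integer search over a small box
best_z, best_cost = None, np.inf
for a in range(-5, 6):
    for b in range(-5, 6):
        z = np.array([a, b])
        cost = float(np.sum((A @ z - y) ** 2))
        if cost < best_cost:
            best_z, best_cost = z, cost
```

Here rounding gives (0, 1) while the true ILS minimizer is (1, 0): the strongly correlated columns tilt the cost surface, which is exactly what LAMBDA's decorrelation step undoes before searching.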

7. Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods

NASA Technical Reports Server (NTRS)

Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

1992-01-01

TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.

8. Explicit least squares system parameter identification for exact differential input/output models

NASA Technical Reports Server (NTRS)

Pearson, A. E.

1993-01-01

The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

9. Least-squares streamline diffusion finite element approximations to singularly perturbed convection-diffusion problems

SciTech Connect

Lazarov, R D; Vassilevski, P S

1999-05-06

In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.

10. Least-Squares Approximation of an Improper by a Proper Correlation Matrix Using a Semi-Infinite Convex Program. Research Report 87-7.

ERIC Educational Resources Information Center

Knol, Dirk L.; ten Berge, Jos M. F.

An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…

11. Least squares estimation of Generalized Space Time AutoRegressive (GSTAR) model and its properties

Ruchjana, Budi Nurani; Borovkova, Svetlana A.; Lopuhaa, H. P.

2012-05-01

In this paper we study least squares estimation of the parameters of the Generalized Space Time AutoRegressive (GSTAR) model and its properties, in particular consistency and asymptotic normality. We use R software to estimate the GSTAR parameters and apply the model to real data, such as oil production data at a volcanic layer.

12. ON ASYMPTOTIC DISTRIBUTION AND ASYMPTOTIC EFFICIENCY OF LEAST SQUARES ESTIMATORS OF SPATIAL VARIOGRAM PARAMETERS. (R827257)

EPA Science Inventory

In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...

13. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

ERIC Educational Resources Information Center

Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

2011-01-01

The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…

14. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

ERIC Educational Resources Information Center

Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

2010-01-01

This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

15. An Alternating Least Squares Method for the Weighted Approximation of a Symmetric Matrix.

ERIC Educational Resources Information Center

ten Berge, Jos M. F.; Kiers, Henk A. L.

1993-01-01

R. A. Bailey and J. C. Gower explored approximating a symmetric matrix "B" by another, "C," in the least squares sense when the squared discrepancies for diagonal elements receive specific nonunit weights. A solution is proposed where "C" is constrained to be positive semidefinite and of a fixed rank. (SLD)

16. Using Technology to Optimize and Generalize: The Least-Squares Line

ERIC Educational Resources Information Center

Burke, Maurice J.; Hodgson, Ted R.

2007-01-01

With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…

17. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

ERIC Educational Resources Information Center

Willett, John B.; Singer, Judith D.

In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…

18. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

2016-08-01

Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
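The iterative scheme described above, minimizing the misfit between the recorded data and the demigrated model by conjugate gradients, is at its core CGLS applied to the normal equations. A minimal sketch, with a small dense matrix `L` standing in for the (de)migration operator, which is an assumption for illustration and not a seismic operator:

```python
import numpy as np

def cgls(L, d, iters=30):
    """Conjugate gradients on min ||d - L m||^2 (CGLS): the normal
    equations L^T L m = L^T d are never formed explicitly."""
    m = np.zeros(L.shape[1])
    r = d.astype(float).copy()      # data residual
    s = L.T @ r                     # gradient of the misfit
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if gamma < 1e-20:           # converged to machine precision
            break
        q = L @ p
        alpha = gamma / (q @ q)
        m = m + alpha * p
        r = r - alpha * q
        s = L.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

rng = np.random.default_rng(3)
L = rng.standard_normal((40, 20))   # stand-in for the demigration operator
m_true = rng.standard_normal(20)
m_hat = cgls(L, L @ m_true)         # recovers m_true for consistent data
```

The preconditioning described in the abstract would amount to applying the structure-oriented filter to the gathers inside each iteration, which this sketch omits.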

19. SAS MACRO LANGUAGE PROGRAM FOR PARTIAL LEAST SQUARES REGRESSION OF SPECTRAL DATA

Technology Transfer Automated Retrieval System (TEKTRAN)

A computer program was written in the SAS language for the purpose of examining the effect of spectral pretreatments on partial least squares regression of near-infrared (or similarly structured) data. The program operates in an unattended batch mode, in which the user may specify a number of commo...

20. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

NASA Technical Reports Server (NTRS)

Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

2003-01-01

The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
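The inverse-distance-weighted least-squares gradient reconstruction discussed above can be sketched as follows. The point cloud and the linear test field are illustrative assumptions; each neighbor contributes one row dx_i · grad ≈ u_i - u_0, scaled by its inverse-distance weight:

```python
import numpy as np

def ls_gradient(x0, u0, pts, vals, weighted=True):
    """Least-squares gradient reconstruction at x0 from neighbor data.
    Each row of the system is dx_i . grad = u_i - u0, optionally scaled
    by inverse-distance weights as used for stretched meshes."""
    dx = pts - x0                              # displacements to neighbors
    du = vals - u0                             # value differences
    if weighted:
        w = 1.0 / np.linalg.norm(dx, axis=1)   # inverse-distance weights
        dx = dx * w[:, None]
        du = du * w
    grad, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return grad

# Linear field u = 2x + 3y: any consistent LS reconstruction is exact.
rng = np.random.default_rng(0)
pts = rng.random((6, 2))
x0 = np.array([0.5, 0.5])
u = lambda p: 2 * p[..., 0] + 3 * p[..., 1]
g = ls_gradient(x0, u(x0), pts, u(pts))
print(np.round(g, 6))   # → [2. 3.]
```

The difficulty the paper reports arises not in this benign setting but when the neighbor stencil is highly anisotropic and curved, where the unweighted rows are dominated by the stretched direction.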

1. A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares

ERIC Educational Resources Information Center

Davis-Stober, Clintin P.

2011-01-01

Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…

2. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

ERIC Educational Resources Information Center

Huang, Jie-Tsuen; Hsieh, Hui-Hsien

2011-01-01

The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

3. A negative-norm least squares method for Reissner-Mindlin plates

Bramble, J. H.; Sun, T.

1998-07-01

In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.

4. Cross-term free based bistatic radar system using sparse least squares

Sevimli, R. Akin; Cetin, A. Enis

2015-05-01

Passive Bistatic Radar (PBR) systems use illuminators of opportunity, such as FM, TV, and DAB broadcasts. The most common illuminator of opportunity used in PBR systems is the FM radio station. Single-FM-channel PBR systems do not have high range resolution and may turn out to be noisy. In order to enhance the range resolution of PBR systems, algorithms using several FM channels at the same time have been proposed. In standard methods, consecutive FM channels are translated to baseband as is and fed to the matched filter to compute the range-Doppler map. Multichannel FM-based PBR systems have better range resolution than single-channel systems. However, spurious sidelobe peaks occur as a side effect. In this article, we linearly predict the surveillance signal using the modulated and delayed reference signal components. We vary the modulation frequency and the delay to cover the entire range-Doppler plane. Whenever there is a target at a specific range and Doppler value, the prediction error is minimized. The cost function of the linear prediction equation has three components: the first term is the real part of the ordinary least squares term, the second term is the imaginary part of the least squares, and the third component is the l2-norm of the prediction coefficients. Separate minimization of the real and imaginary parts reduces the sidelobes and decreases the noise level of the range-Doppler map. The third term enforces a sparse solution of the least squares problem. We experimentally observed that this approach is better than both the standard least squares and other sparse least squares approaches in terms of sidelobes. Extensive simulation examples will be presented in the final form of the paper.

5. A least-squares minimisation approach to depth determination from numerical second horizontal self-potential anomalies

Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed

2009-06-01

This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.
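Once the inversion is reduced to a nonlinear equation of the form f(z) = 0, any standard bracketing root solver applies. A sketch with a hypothetical stand-in f (the actual anomaly relation depends on the source geometry and is not reproduced here); bisection is robust whenever a depth bracket is known:

```python
# Bisection on a stand-in nonlinear equation f(z) = 0; the cubic below
# is purely illustrative, not the derived SP anomaly relation.
def bisect(f, lo, hi, tol=1e-9):
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # keep the half-interval on which f changes sign
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

f = lambda z: z**3 - 2 * z - 5
z = bisect(f, 1.0, 3.0)
```

In the paper's setting, f would be evaluated from the windowed second-derivative anomaly values, and the procedure repeated for each window length.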

6. Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines

PubMed Central

Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen

2015-01-01

This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and its measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes. PMID:26393611
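In its dual form, LS-SVM regression (the Suykens formulation) reduces to a single linear system rather than a quadratic program. A minimal sketch on toy data; the RBF kernel width, the regularization constant, and the sine target are all assumptions for illustration, not the paper's calibration model:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X, Xtr, b, alpha, sigma=1.0):
    return rbf(X, Xtr, sigma) @ alpha + b

# Toy nonlinear calibration target y = sin(x).
Xtr = np.linspace(0, 3, 30)[:, None]
ytr = np.sin(Xtr).ravel()
b, alpha = lssvm_fit(Xtr, ytr)
pred = lssvm_predict(np.array([[1.5]]), Xtr, b, alpha)
```

The feature-subset selection the abstract describes would determine which columns of `X` enter the kernel; here a single input is used for brevity.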

7. Detection of Glutamic Acid in Oilseed Rape Leaves Using Near Infrared Spectroscopy and the Least Squares-Support Vector Machine

PubMed Central

Bao, Yidan; Kong, Wenwen; Liu, Fei; Qiu, Zhengjun; He, Yong

2012-01-01

Amino acids are quite important indices to indicate the growth status of oilseed rape under herbicide stress. Near infrared (NIR) spectroscopy combined with chemometrics was applied for fast determination of glutamic acid in oilseed rape leaves. The optimal spectral preprocessing method was obtained after comparing Savitzky-Golay smoothing, standard normal variate, multiplicative scatter correction, first and second derivatives, detrending and direct orthogonal signal correction. Linear and nonlinear calibration methods were developed, including partial least squares (PLS) and least squares-support vector machine (LS-SVM). The most effective wavelengths (EWs) were determined by the successive projections algorithm (SPA), and these wavelengths were used as the inputs of the PLS and LS-SVM models. The best prediction results were achieved by the SPA-LS-SVM (Raw) model, with correlation coefficient r = 0.9943 and root mean square error of prediction (RMSEP) = 0.0569 for the prediction set. These results indicated that NIR spectroscopy combined with SPA-LS-SVM was feasible for the fast and effective detection of glutamic acid in oilseed rape leaves. The selected EWs could be used to develop spectral sensors, and the important and basic amino acid data were helpful to study the function mechanism of the herbicide. PMID:23203052

8. Kinetic approach for the enzymatic determination of levodopa and carbidopa assisted by multivariate curve resolution-alternating least squares.

PubMed

Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S

2010-07-12

A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and were analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, to evaluate the prediction ability of the model, a set with six validation mixtures was used. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated by pharmacopeia method (high performance liquid chromatography). No significant differences were found (alpha=0.05) between the reference values and the ones obtained with the proposed method. It is important to note that a unique chemometric model made it possible to determine both analytes simultaneously. PMID:20630175

9. Weighted linear least squares problem: an interval analysis approach to rank determination

SciTech Connect

Manteuffel, T. A.

1980-08-01

This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ≈ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^-1 can be used to bound the rank of A^I. A modification of the Modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.

10. Partial least squares regression on DCT domain for infrared face recognition

Xie, Zhihua

2014-09-01

Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method using Partial Least Squares (PLS) regression on Discrete Cosine Transform (DCT) coefficients. With its strong ability for data de-correlation and compact energy, the DCT is studied to obtain compact features in infrared faces. To dig out discriminative information in the DCT coefficients, a class-specific One-to-Rest Partial Least Squares (PLS) classifier is learned for accurate classification. The infrared data were collected by an infrared camera, the Thermo Vision A40 supplied by FLIR Systems Inc. The experimental results show that the recognition rate of the proposed algorithm can reach 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and DCT.

NASA Technical Reports Server (NTRS)

Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

2016-01-01

This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
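The Recursive Least Squares (RLS) identification step can be sketched generically. The two-parameter linear-in-the-regressors model and the noise level below are illustrative assumptions, not the GTM aerodynamic coefficient model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step: update parameter estimate theta and covariance P
    from regressor phi and scalar measurement y (lam = forgetting factor)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # innovation correction
    P = (P - np.outer(k, Pphi)) / lam        # covariance update
    return theta, P

# Identify y = 2*u1 - 3*u2 from noisy streaming data.
rng = np.random.default_rng(1)
true = np.array([2.0, -3.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)      # vague prior
for _ in range(200):
    phi = rng.standard_normal(2)
    y = true @ phi + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
# theta converges to approximately [2, -3]
```

In the flight-control context the regressors would be built from the measured VCCTEF deflections and flight condition at each control update.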

12. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

Samiei-Esfahany, Sami; Hanssen, Ramon F.

2012-01-01

The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

13. Quantitative Analysis of Isotope Distributions In Proteomic Mass Spectrometry Using Least-Squares Fourier Transform Convolution

PubMed Central

Sperling, Edit; Bunner, Anne E.; Sykes, Michael T.; Williamson, James R.

2008-01-01

Quantitative proteomic mass spectrometry involves comparison of the amplitudes of peaks resulting from different isotope labeling patterns, including fractional atomic labeling and fractional residue labeling. We have developed a general and flexible analytical treatment of the complex isotope distributions that arise in these experiments, using Fourier transform convolution to calculate labeled isotope distributions and least-squares for quantitative comparison with experimental peaks. The degree of fractional atomic and fractional residue labeling can be determined from experimental peaks at the same time as the integrated intensity of all of the isotopomers in the isotope distribution. The approach is illustrated using data with fractional 15N-labeling and fractional 13C-isoleucine labeling. The least-squares Fourier transform convolution approach can be applied to many types of quantitative proteomic data, including data from stable isotope labeling by amino acids in cell culture and pulse labeling experiments. PMID:18522437
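The core numerical idea, building composite isotope distributions by convolution and then fitting amplitudes by least squares, can be sketched on a toy pattern. The per-atom isotope abundances and the single fitted amplitude below are made up for illustration:

```python
import numpy as np

def convolve_dists(p, q, n):
    """Convolve two isotope distributions via FFT (zero-padded to n)."""
    return np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)

# Toy two-atom molecule with a per-atom isotope pattern [0.9, 0.1]:
# the molecular distribution is the self-convolution of the atomic one.
atom = np.array([0.9, 0.1])
mol = convolve_dists(atom, atom, 4)        # [0.81, 0.18, 0.01, 0.]

# Observed peaks are the pattern scaled by an unknown abundance;
# a one-parameter least-squares fit recovers the amplitude.
observed = 5.0 * mol
amp = (mol @ observed) / (mol @ mol)       # amp ≈ 5.0
```

In the full method several such patterns (for different labeling fractions) are fit jointly, so the scalar projection above becomes a small multi-column least-squares problem.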

14. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

2015-06-01

In this paper we proposed a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, based on the training symbols the theoretical received sequence is composed. Next the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e., the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
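The outer/inner loop structure can be sketched as a search over the Doppler coefficient with a closed-form least-squares gain in the inner step. The signal model (complex gain times Doppler-shifted training sequence plus noise) and all parameter values below are illustrative assumptions:

```python
import numpy as np

def estimate(r, s, f_grid):
    """Outer loop: sweep Doppler candidates f; inner step: closed-form
    LS fit of the complex gain a for the model r = a * s * exp(j2πfn)."""
    n = np.arange(len(s))
    best = (np.inf, None, None)
    for f in f_grid:
        m = s * np.exp(2j * np.pi * f * n)   # Doppler-shifted reference
        a = np.vdot(m, r) / np.vdot(m, m)    # LS gain (closed form)
        err = np.linalg.norm(r - a * m) ** 2
        if err < best[0]:
            best = (err, f, a)
    return best[1], best[2]

rng = np.random.default_rng(2)
s = rng.standard_normal(256) + 1j * rng.standard_normal(256)
n = np.arange(256)
r = 0.8 * s * np.exp(2j * np.pi * 1e-3 * n) + 0.01 * rng.standard_normal(256)
f_hat, a_hat = estimate(r, s, np.linspace(0, 2e-3, 201))
```

The paper's iterative solver refines the two unknowns alternately rather than sweeping a grid, but the separability exploited is the same: for fixed Doppler, the gain is linear and solvable in closed form.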

15. On least squares approximations to indefinite problems of the mixed type

NASA Technical Reports Server (NTRS)

Fix, G. J.; Gunzburger, M. D.

1978-01-01

A least squares method is presented for computing approximate solutions of indefinite partial differential equations of the mixed type such as those that arise in connection with transonic flutter analysis. The method retains the advantages of finite difference schemes namely simplicity and sparsity of the resulting matrix system. However, it offers some great advantages over finite difference schemes. First, the method is insensitive to the value of the forcing frequency, i.e., the resulting matrix system is always symmetric and positive definite. As a result, iterative methods may be successfully employed to solve the matrix system, thus taking full advantage of the sparsity. Furthermore, the method is insensitive to the type of the partial differential equation, i.e., the computational algorithm is the same in elliptic and hyperbolic regions. In this work the method is formulated and numerical results for model problems are presented. Some theoretical aspects of least squares approximations are also discussed.

16. Least squares finite element method with high continuity NURBS basis for incompressible Navier-Stokes equations

Chen, De-Xiang; Xu, Zi-Li; Liu, Shi; Feng, Yong-Xin

2014-03-01

The modern least squares finite element method (LSFEM) has advantages over the mixed finite element method for non-self-adjoint problems like the Navier-Stokes equations, but has difficulty achieving norm equivalence and mass conservation when using a C0-type basis. In this paper, an LSFEM with non-uniform rational B-splines (NURBS) is proposed for the Navier-Stokes equations. High-order-continuity NURBS are used to construct the finite-dimensional spaces for both velocity and pressure. The variational form is derived from the governing equations with primitive variables, and the DOFs due to additional variables are not necessary. There is a novel k-refinement which has spectral convergence of the least squares functional. The method also has the same advantages as isogeometric analysis, like automatic mesh generation and exact geometry representation. Several benchmark problems are solved using the proposed method. The results agree well with the benchmark solutions available in the literature. The results also show good performance in mass conservation.

17. Geodesic least squares regression for scaling studies in magnetic confinement fusion

SciTech Connect

Verdoolaege, Geert

2015-01-13

In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

18. Accuracy of least-squares methods for the Navier-Stokes equations

NASA Technical Reports Server (NTRS)

Bochev, Pavel B.; Gunzburger, Max D.

1993-01-01

Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

19. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

Demetriou, I. C.

2006-04-01

Fortran 77 software is given for least squares smoothing of data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics

20. Model updating of rotor systems by using Nonlinear least square optimization

Jha, A. K.; Dewangan, P.; Sarangi, M.

2016-07-01

Mathematical models of structures or machinery always differ from the existing physical system, because the ability of numerical predictions to approach the behavior of a physical system is limited by the assumptions used in the development of the mathematical model. Model updating is therefore necessary so that the updated model replicates the physical system. This work focuses on the model updating of rotor systems at various speeds as well as at different modes of vibration. Support bearing characteristics severely influence the dynamics of rotor systems like turbines, compressors, pumps, electrical machines, machine tool spindles, etc. Therefore, the bearing parameters (stiffness and damping) are considered to be the updating parameters. A finite element model of the rotor system is developed using Timoshenko beam elements. The unbalance response in the time domain and the frequency response function have been calculated by numerical techniques and compared with experimental data to update the FE model of the rotor system. An algorithm based on the unbalance response in the time domain is proposed for updating the rotor system at different running speeds of the rotor. An attempt has been made to define an Unbalance Response Assurance Criterion (URAC) to check the degree of correlation between the updated FE model and the physical model.

1. Least squares adjustment of large-scale geodetic networks by orthogonal decomposition

SciTech Connect

George, J.A.; Golub, G.H.; Heath, M.T.; Plemmons, R.J.

1981-11-01

This article reviews some recent developments in the solution of large sparse least squares problems typical of those arising in geodetic adjustment problems. The new methods are distinguished by their use of orthogonal transformations which tend to improve numerical accuracy over the conventional approach based on the use of the normal equations. The adaptation of these new schemes to allow for the use of auxiliary storage and their extension to rank deficient problems are also described.
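
The numerical advantage of orthogonal transformations over the normal equations can be shown in a few lines. This is a generic illustration with an ill-conditioned polynomial design matrix, not a geodetic network:

```python
import numpy as np

# Ill-conditioned overdetermined system: near-dependent columns, mimicking
# the poor conditioning that large adjustment problems can exhibit.
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 12)            # Vandermonde matrix, very large condition number
x_true = np.ones(12)
b = A @ x_true

# The normal equations square the condition number ...
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# ... while an orthogonal (QR) factorization works on A directly.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err_ne = np.linalg.norm(x_ne - x_true)
err_qr = np.linalg.norm(x_qr - x_true)
```

The QR solution is accurate to roughly `cond(A) * eps`, whereas the normal-equations solution degrades with `cond(A)**2`, which is exactly the accuracy gap the article's orthogonal-decomposition methods exploit.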

2. Least squares algorithm for region-of-interest evaluation in emission tomography

SciTech Connect

Formiconi, A.R. (Dipt. di Fisiopatologia Clinica)

1993-03-01

In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction: filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating the typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, the bias of the estimates decreased with increasing number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.

3. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach.

PubMed

2013-04-01

Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. PMID:23454702
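
The difference between ordinary and weighted least squares that motivates the paper can be sketched with hypothetical build-up data (the variable names and the heteroscedastic noise model are illustrative assumptions; the Bayesian version would additionally place priors on the coefficients and propagate uncertainty by Monte Carlo):

```python
import numpy as np

# Hypothetical build-up data: pollutant load vs antecedent dry days,
# with measurement scatter growing with the load (heteroscedastic noise).
rng = np.random.default_rng(1)
days = np.linspace(1.0, 20.0, 40)
X = np.column_stack([np.ones_like(days), days])
beta_true = np.array([2.0, 0.8])
sigma = 0.05 * (X @ beta_true)                 # noise std proportional to signal
y = X @ beta_true + rng.normal(0.0, sigma)

# Ordinary least squares: every observation weighted equally.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: scale each row by 1/sigma_i, so noisy
# high-load observations count for less.
W = 1.0 / sigma
beta_wls, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
```

With fixed inputs and few observations, the OLS point estimate carries no information about its own reliability; the weighting (and, further, the Bayesian treatment) is what lets the variability of the inputs enter the prediction.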

4. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

DOEpatents

Van Benthem, Mark H.; Keenan, Michael R.

2008-11-11

A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
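
The baseline the combinatorial algorithm accelerates is easy to state: solve a nonnegativity-constrained least squares problem once per observation vector. A naive column-by-column sketch (toy data; the patented method instead groups observation vectors that share the same active set so factorizations are reused):

```python
import numpy as np
from scipy.optimize import nnls

# Naive multi-right-hand-side nonnegative least squares:
# solve min ||A x - b||, x >= 0, separately for each observation vector.
rng = np.random.default_rng(2)
A = rng.random((20, 4))             # mixing matrix
X_true = rng.random((4, 30))        # 30 nonnegative coefficient vectors
B = A @ X_true                      # 30 observation vectors

X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
```

Each `nnls` call runs its own active-set iteration from scratch; the combinatorial reorganization pays off precisely when the number of columns of `B` is large and many of them end up with identical active sets.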

5. The incomplete inverse and its applications to the linear least squares problem

NASA Technical Reports Server (NTRS)

Morduch, G. E.

1977-01-01

A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.

6. Mobile Location Using Improved Covariance Shaping Least-Squares Estimation in Cellular Systems

Chang, Ann-Chen; Lee, Yu-Hong

This Letter deals with the problem of non-line-of-sight (NLOS) propagation in cellular systems devoted to location purposes. In conjunction with a variable loading technique, we present an efficient method that makes the covariance shaping least-squares estimator robust against NLOS effects. Compared with other methods, the proposed improved estimator has high accuracy under white Gaussian measurement noise and NLOS effects.

7. Path model analyzed with ordinary least squares multiple regression versus LISREL.

PubMed

Kline, T J; Klammer, J D

2001-03-01

The data of a specified path model using the variables of voice, perceived organizational support, being heard, and procedural justice were subjected to the two separate structural equation modeling analytic techniques--that of ordinary least squares regression and LISREL. A comparison of the results and differences between the analyses is discussed, with the LISREL approach being stronger from both theoretical and statistical perspectives. PMID:11403343

8. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

NASA Technical Reports Server (NTRS)

Sen, Symal K.; Shaykhian, Gholam Ali

2011-01-01

Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
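
The paper's code is Matlab; the core idea can be sketched in Python on a tiny system (the residual-based "inconsistency index" and the drop-the-worst rule below are simplified stand-ins for the paper's criteria):

```python
import numpy as np

A = np.array([[1.0,  1.0],
              [2.0,  2.0],    # numerically redundant: a multiple of row 0
              [1.0, -1.0],
              [1.0,  1.0]])   # same left-hand side as row 0 ...
b = np.array([2.0, 4.0, 0.0, 9.0])   # ... but a contradictory right-hand side

# Per-equation least-squares residuals serve as a crude inconsistency index.
x_all, *_ = np.linalg.lstsq(A, b, rcond=None)
inconsistency = np.abs(A @ x_all - b)

# Remove the equation that is "too contradictory", then re-solve;
# lstsq returns the minimum-norm least-squares solution of what remains.
worst = int(np.argmax(inconsistency))
keep = np.arange(len(b)) != worst
x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
```

Here equation 3 contradicts equation 0, dominates the residual vector, and once it is removed the reduced system is consistent with solution x = (1, 1).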

9. Partial Least Squares (PLS) methods for neuroimaging: a tutorial and review.

PubMed

Krishnan, Anjali; Williams, Lynne J; McIntosh, Anthony Randal; Abdi, Hervé

2011-05-15

Partial Least Squares (PLS) methods are particularly suited to the analysis of relationships between measures of brain activity and of behavior or experimental design. In neuroimaging, PLS refers to two related methods: (1) symmetric PLS or Partial Least Squares Correlation (PLSC), and (2) asymmetric PLS or Partial Least Squares Regression (PLSR). The most popular (by far) version of PLS for neuroimaging is PLSC. It exists in several varieties based on the type of data that are related to brain activity: behavior PLSC analyzes the relationship between brain activity and behavioral data, task PLSC analyzes how brain activity relates to pre-defined categories or experimental design, seed PLSC analyzes the pattern of connectivity between brain regions, and multi-block or multi-table PLSC integrates one or more of these varieties in a common analysis. PLSR, in contrast to PLSC, is a predictive technique which, typically, predicts behavior (or design) from brain activity. For both PLS methods, statistical inferences are implemented using cross-validation techniques to identify significant patterns of voxel activation. This paper presents both PLS methods and illustrates them with small numerical examples and typical applications in neuroimaging. PMID:20656037
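
The symmetric variant (PLSC) reduces to a singular value decomposition of the brain-by-behavior cross-covariance matrix. A minimal numerical sketch with toy random data (dimensions and variable names are illustrative, not from the paper):

```python
import numpy as np

# Minimal behavior-PLSC sketch: SVD of the cross-covariance matrix
# of column-centred brain and behavior data.
rng = np.random.default_rng(3)
n = 40                                   # scans
brain = rng.normal(size=(n, 100))        # voxel activity (toy data)
behav = rng.normal(size=(n, 3))          # behavioral measures (toy data)

Xc = brain - brain.mean(axis=0)
Yc = behav - behav.mean(axis=0)
R = Yc.T @ Xc                            # 3 x 100 cross-covariance matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Latent variables: projections of the data on the singular vectors.
brain_scores = Xc @ Vt.T                 # n x 3
behav_scores = Yc @ U                    # n x 3
```

By construction, the covariance between the first pair of latent variables equals the first singular value, which is the quantity whose significance PLSC assesses by permutation and cross-validation.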

10. Local Least Squares Spectral Filtering and Combination by Harmonic Functions on the Sphere

Sjöberg, L.

2011-01-01

Least squares spectral combination is a well-known technique in physical geodesy. The established technique either suffers from the assumption of no correlations of errors between degrees or from a global optimisation of the variance or mean square error of the estimator. Today Earth gravitational models are available together with their full covariance matrices to rather high degrees, extra information that should be properly taken care of. Here we derive the local least squares spectral filter for a stochastic function on the sphere based on the spectral representation of the observable and its error covariance matrix. Second, the spectral combination of two erroneous harmonic series is derived based on their full covariance matrices. In both cases the transition from spectral representation of an estimator to an integral representation is demonstrated. Practical examples are given. Taking advantage of the full covariance matrices in the spectral combination implies a huge computational burden in determining the least squares filters and combinations for high-degree spherical harmonic series. A reasonable compromise between accuracy of estimator and workload could be to consider only one weight parameter/degree, yielding the optimum filtering and combination of Laplace series.

11. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

2016-06-01

At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the semi-supervised LSSVM model greatly improves, and the improvement grows as the number of training samples decreases. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

12. Theoretical study of the incompressible Navier-Stokes equations by the least-squares method

NASA Technical Reports Server (NTRS)

Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.

1994-01-01

Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method, which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations, since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is studied thoroughly first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique is then employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of the differential equations and additional boundary conditions. In this paper, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators, which play an essential role in the analysis.

13. Least-squares electromagnetic analysis of thin dielectrics using surface equivalence

Shieh, Kuen-Wey

2000-10-01

In this thesis, the motivation was to study the applicability and test the limits of analytical formulations using surface equivalence in dealing with the scattering problem of a thin dielectric slab of finite extent. In this application of the surface equivalence principle, the unknowns, equivalent surface electric and magnetic currents, are established using the method of moments. Described herein, in order to solve for the unknowns, are four new numerical techniques called LSM, CLSM, CLSM+RCA and CWLSM+RCA, employed to deal with the radar cross section (RCS) of electromagnetic wave scattering from thin dielectric slabs, for different thicknesses in three dimensions. The designations LSM, CLSM, CLSM+RCA and CWLSM+RCA stand for least squares method, constrained least squares method, constrained least squares method plus ring current approximation and constrained weighted least squares method plus ring current approximation, respectively. The least squares method is utilized in the new numerical techniques, providing a better solution in the null region of the RCS than the combined field integral equation (CFIE). The new numerical techniques employ surface distributions of equivalent currents, thus in principle requiring less computer memory than those employing volume distributions of current density. Moreover, there is no need to worry about how nearly perfect the absorbing boundary condition (ABC) used in the finite difference time domain technique (FDTD) should be. Further, in this work, the importance of the equivalent surface currents flowing on the edge of a thin slab (referred to as 'ring currents') has been identified. The new techniques also show fast convergence for the particularly challenging case of edge-on wave incidence, even when the slab is as thin as 0.001 λ0 (λ0 is the wavelength in free space). In particular, the CLSM+RCA and CWLSM+RCA analyses have been validated by experiments for the case of backward RCS, these experiments

14. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

NASA Technical Reports Server (NTRS)

Brozenec, Thomas F.; Bender, Douglas J.

1994-01-01

Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as the matrix determinant, the matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
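
The unconstrained approach can be sketched directly: fit the transformation matrix by ordinary least squares and check that it comes out nearly orthogonal. This toy example (synthetic reference vectors, a known rotation, mild noise) illustrates the claim, not the paper's flight implementation:

```python
import numpy as np

# True attitude: a rotation about the z-axis.
theta = 0.2
C = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rng = np.random.default_rng(4)
V = rng.normal(size=(6, 3))                    # six reference directions
V /= np.linalg.norm(V, axis=1, keepdims=True)
W = V @ C.T + 1e-3 * rng.normal(size=V.shape)  # noisy measurements

# Unconstrained least squares: solve V A^T ~= W, ignoring orthogonality.
At, *_ = np.linalg.lstsq(V, W, rcond=None)
A = At.T

# With measurement noise much smaller than the rotation, A is nearly orthogonal.
ortho_err = np.linalg.norm(A @ A.T - np.eye(3))
```

As the abstract states, orthogonality degrades only when the noise becomes comparable to the attitude rotation itself; here the noise is three orders of magnitude smaller and `A` tracks `C` closely.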

15. A least-squares minimization approach for model parameters estimate by using a new magnetic anomaly formula

Abo-Ezz, E. R.; Essa, K. S.

2016-04-01

A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. This approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures using magnetic data. The utility and validity of the new proposed approach have been demonstrated through various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fitted anomaly has been delineated by estimating the root-mean-square (rms) error. The satisfactoriness of this approach is judged by comparing the obtained results with other available geological or geophysical information.

16. A principal-component and least-squares method for allocating polycyclic aromatic hydrocarbons in sediment to multiple sources

SciTech Connect

Burns, W.A.; Mankiewicz, P.J.; Bence, A.E.; Page, D.S.; Parker, K.R.

1997-06-01

A method was developed to allocate polycyclic aromatic hydrocarbons (PAHs) in sediment samples to the PAH sources from which they came. The method uses principal-component analysis to identify possible sources and a least-squares model to find the source mix that gives the best fit of 36 PAH analytes in each sample. The method identified 18 possible PAH sources in a large set of field data collected in Prince William Sound, Alaska, USA, after the 1989 Exxon Valdez oil spill, including diesel oil, diesel soot, spilled crude oil in various weathering states, natural background, creosote, and combustion products from human activities and forest fires. Spill oil was generally found to be a small increment of the natural background in subtidal sediments, whereas combustion products were often the predominant sources for subtidal PAHs near sites of past or present human activity. The method appears to be applicable to other situations, including other spills.

17. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

2014-08-01

Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, higher-quality fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the point cloud does not cover a complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
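
The LM step of such a fit is compact to write down. A sketch for the circle case on an incomplete (half-arc) point cloud, with a plain centroid-based initial guess standing in for the paper's chaos-optimized one:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic half-circle point cloud (an "incomplete" feature), mildly noisy.
rng = np.random.default_rng(5)
cx, cy, r = 3.0, -2.0, 5.0
phi = rng.uniform(0.0, np.pi, 60)
pts = np.column_stack([cx + r * np.cos(phi), cy + r * np.sin(phi)])
pts += 0.01 * rng.normal(size=pts.shape)

def residuals(p):
    """Geometric distances from the points to the circle (x0, y0, rad)."""
    x0, y0, rad = p
    return np.hypot(pts[:, 0] - x0, pts[:, 1] - y0) - rad

# Crude initial guess: centroid plus mean distance to it.
xm, ym = pts.mean(axis=0)
p0 = [xm, ym, np.hypot(pts[:, 0] - xm, pts[:, 1] - ym).mean()]

fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
x0, y0, rad = fit.x
```

On this well-behaved example the centroid guess already suffices; the paper's point is that a chaos-optimization search for `p0` makes convergence reliable in harder configurations without raising the overall cost.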

18. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

NASA Technical Reports Server (NTRS)

Krishnamurthy, Thiagarajan

2005-01-01

Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
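
A minimal one-dimensional sketch of derivative estimation by moving least squares (illustrative only; the report works with multivariate response surfaces): at each evaluation point, fit a local quadratic with weights centred there and read the derivative off the linear coefficient.

```python
import numpy as np

def mls_derivative(x0, x, y, h=0.3):
    """MLS estimate of dy/dx at x0: weighted local quadratic fit."""
    w = np.exp(-((x - x0) / h) ** 2)           # Gaussian locality weights
    B = np.column_stack([np.ones_like(x), x - x0, (x - x0) ** 2])
    coef, *_ = np.linalg.lstsq(B * w[:, None], y * w, rcond=None)
    return coef[1]                             # slope of the local fit at x0

x = np.linspace(0.0, 2.0 * np.pi, 80)
y = np.sin(x)
d = mls_derivative(np.pi / 2.0, x, y)          # true derivative: cos(pi/2) = 0
```

Because the weights move with the evaluation point, the fitted surface and its derivative vary smoothly, which is the property the report compares against Kriging and RBF interpolants.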

19. A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.

PubMed

Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam

2012-01-01

Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles. PMID:22723895
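
The classical least squares baseline that the hybrid method extends is a single linear solve. A toy sketch with synthetic Gaussian "reference spectra" and a polynomial background (the paper's method additionally appends principal components of reference variability, eigenvalue-constrained, to the design matrix):

```python
import numpy as np

# Synthetic reference spectra: one narrow peak per analyte, on a toy axis.
wavenumber = np.linspace(0.0, 1.0, 200)
ref1 = np.exp(-0.5 * ((wavenumber - 0.3) / 0.02) ** 2)   # analyte 1
ref2 = np.exp(-0.5 * ((wavenumber - 0.7) / 0.02) ** 2)   # analyte 2
background = 0.2 + 0.1 * wavenumber                      # slowly varying

# "Measured" mixture spectrum: known concentrations plus background.
measured = 2.0 * ref1 + 0.5 * ref2 + background

# Classical least squares: references plus low-order polynomial background.
S = np.column_stack([ref1, ref2, np.ones_like(wavenumber), wavenumber])
coeffs, *_ = np.linalg.lstsq(S, measured, rcond=None)    # concentrations first
```

The fragility the paper targets is visible in this formulation: if the true analyte or background spectra drift away from the columns of `S`, the recovered concentrations absorb the error, which is why the hybrid method lets the reference columns vary along trained principal components.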

20. Resolution of five-component mixture using mean centering ratio and inverse least squares chemometrics

PubMed Central

2013-01-01

Background A comparative study of the use of mean centering of ratio spectra and inverse least squares for the resolution of paracetamol, methylparaben, propylparaben, chlorpheniramine maleate and pseudoephedrine hydrochloride has been carried out, showing that the two chemometric methods provide a good example of the high resolving power of these techniques. Method (I) is the mean centering of ratio spectra, which uses the mean-centered ratio spectra in four successive steps; this eliminates the derivative steps and therefore improves the signal-to-noise ratio. The absorption spectra of the prepared solutions were measured in the range of 220-280 nm. Method (II) is based on inverse least squares, which depends on updating the developed multivariate calibration model. The absorption spectra of the prepared mixtures in the range 230-270 nm were recorded. Results The linear concentration ranges were 0-25.6, 0-15.0, 0-15.0, 0-45.0 and 0-100.0 μg mL-1 for paracetamol, methylparaben, propylparaben, chlorpheniramine maleate and pseudoephedrine hydrochloride, respectively. The mean recoveries for simultaneous determination were between 99.9-101.3% for the two methods. The two developed methods have been successfully used for prediction of the five-component mixture in Decamol Flu syrup with good selectivity, high sensitivity and extremely low detection limits. Conclusion No published method has been reported for simultaneous determination of the five components of this mixture, so the results of the mean centering of ratio spectra method were compared with those of the proposed inverse least squares method. Statistical comparison was performed using the t-test and F-ratio at P = 0.05. There was no significant difference between the results. PMID:24028626

1. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds.

PubMed

Chen, Y M; Lin, P; He, J Q; He, Y; Li, X L

2016-01-01

This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run test methods of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, Laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of the LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy. PMID:26817580

2. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds

Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X. L.

2016-01-01

This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run test methods of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, Laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of the LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy.

3. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds

PubMed Central

Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X.L.

2016-01-01

This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run test methods of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, Laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of the LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy. PMID:26817580

4. Least squares support vector machines for direction of arrival estimation with error control and validation.

SciTech Connect

Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

2003-02-01

The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

5. Retinal Oximetry with 510-600 nm Light Based on Partial Least-Squares Regression Technique

Arimoto, Hidenobu; Furukawa, Hiromitsu

2010-11-01

The oxygen saturation distribution in the retinal blood stream is estimated by measuring spectral images and applying partial least-squares regression. The wavelength range used for the calculation is 510 to 600 nm. The regression model for estimating retinal oxygen saturation is built on the basis of arterial and venous blood spectra. The experiment is performed using an originally designed spectral ophthalmoscope. The obtained two-dimensional (2D) oxygen saturation map indicates reasonable oxygen levels across the retina. The measurement quality is compared with that obtained using other wavelength sets and data processing methods.

6. Distribution of error in least-squares solution of an overdetermined system of linear simultaneous equations

NASA Technical Reports Server (NTRS)

Miller, C. D.

1972-01-01

Probability density functions were derived for errors in the evaluation of unknowns by the least squares method in a system of nonhomogeneous linear equations. Coefficients of the unknowns were assumed correct, and computational precision was also assumed. A vector space was used, with the number of dimensions equal to the number of equations. An error vector was defined and assumed to have a uniform distribution of orientation throughout the vector space. The density functions are shown to be insensitive to the biasing effects of the source of the system of equations.
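
A minimal NumPy illustration of the setting analyzed (not the paper's derivation): an overdetermined, full-column-rank system solved by least squares, with the error in the evaluated unknowns inspected directly. The sizes and noise level are arbitrary:

```python
import numpy as np

# Overdetermined system: 8 equations, 3 unknowns.
rng = np.random.default_rng(42)
A = rng.normal(size=(8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + rng.normal(scale=0.01, size=8)   # small "error vector"

# The least-squares solution minimizes ||Ax - b||_2.
x_ls, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

err = x_ls - x_true          # error in the evaluated unknowns
print(rank)                  # 3: full column rank
```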

7. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

SciTech Connect

Griffin, P.J.

1998-05-01

This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

8. Small-kernel, constrained least-squares restoration of sampled image data

NASA Technical Reports Server (NTRS)

Hazra, Rajeeb; Park, Stephen K.

1992-01-01

Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

9. 3-D foliation unfolding with volume and bed-length least-squares conservation

SciTech Connect

Leger, M.; Morvan, J.M.; Thibaut, M.

1994-12-31

Restoration of a geologic structure at earlier times is a good means to criticize, and then to improve, its interpretation. Restoration software already exists in 2D, but a lot of work remains to be done in 3D. The authors focus on the interbedding slip phenomenon, with bed-length and volume conservation. They unfold a (geometrical) foliation by optimizing the following least-squares criteria: horizontalness, bed-length, and volume conservation, under equality constraints related to the position of the binding or pin-surface.

10. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

NASA Technical Reports Server (NTRS)

Frisbee, Joseph H., Jr.

2011-01-01

State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

11. Voronoi based discrete least squares meshless method for heat conduction simulation in highly irregular geometries

2016-01-01

A new technique is used in the Discrete Least Squares Meshfree (DLSM) method to remove the common deficiencies of meshfree methods in handling problems containing cracks or concave boundaries. An enhanced Discrete Least Squares Meshless method named VDLSM (Voronoi-based Discrete Least Squares Meshless) is developed in order to solve the steady-state heat conduction problem in irregular solid domains including concave boundaries or cracks. Existing meshless methods cannot precisely estimate the required unknowns in the vicinity of such boundaries, and previous research has been limited to domains with regular convex boundaries. To this end, the advantages of the Voronoi tessellation algorithm are exploited: the support domains of the sampling points are determined using a Voronoi tessellation. For the weight functions, a cubic spline polynomial based on a normalized distance variable is used, which provides a high degree of smoothness near the above-mentioned discontinuities. Finally, Moving Least Squares (MLS) shape functions are constructed using a variational method. This straightforward scheme can properly estimate the unknowns (in this particular study, the temperatures at the nodal points) near and on the crack faces, crack tip, or concave boundaries without the need for extra backward corrective procedures, i.e., iterative calculations for modifying the shape functions of the nodes located near or on these types of complex boundaries. The accuracy and efficiency of the presented method are investigated by analyzing four particular examples. Results obtained from VDLSM are compared with the available analytical results or, when an analytical solution is not available, with the results of the well-known Finite Element Method (FEM). The comparisons reveal that the proposed technique gives high accuracy for the solution of steady-state heat conduction problems within cracked domains or domains with concave boundaries.

12. In situ ply strength: An initial assessment. [using laminate fracture data and a least squares method

NASA Technical Reports Server (NTRS)

Chamis, C. C.; Sullivan, T. L.

1978-01-01

The in situ ply strengths in several composites were calculated from laminate fracture data for appropriate low-modulus and high-modulus fiber composites, used in conjunction with the least squares method. The laminate fracture data were obtained from tests on Modmor-I graphite/epoxy, AS-graphite/epoxy, boron/epoxy and E-glass/epoxy. The results show that the calculated in situ ply strengths can be considerably different from those measured in unidirectional composites, especially the transverse strengths and those in angleplied laminates with transply cracks.

13. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

SciTech Connect

Yoo, Jaechil

1996-12-31

Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

14. A least-squares method for second order noncoercive elliptic partial differential equations

Ku, Jaeun

2007-03-01

In this paper, we consider a least-squares method proposed by Bramble, Lazarov and Pasciak (1998) which can be thought of as a stabilized Galerkin method for noncoercive problems with unique solutions. We modify their method by weakening the strength of the stabilization terms and present various new error estimates. The modified method has all the desirable properties of the original method; indeed, we shall show some theoretical properties that are not known for the original method. At the same time, our numerical experiments show an improvement of the method due to the modification.

15. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

Gao, Ling; Ren, Shouxin

2005-10-01

Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethyl ammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer medium. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and the wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples, so the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of noise removal can be improved. In WPTPLS, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.

16. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

PubMed

Zimmer, Christoph; Sahle, Sven

2016-04-01

Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353
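
As a sketch of the "naive" deterministic least-squares baseline compared in the study, applied here to an immigration-death model, here is a SciPy version; the rate values, noise level and initial guess are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Immigration-death model: dx/dt = k_in - k_out * x (illustrative rates)
def simulate(params, t):
    k_in, k_out = params
    sol = solve_ivp(lambda t, x: k_in - k_out * x, (t[0], t[-1]),
                    [0.0], t_eval=t)
    return sol.y[0]

t = np.linspace(0, 10, 50)
true_params = (5.0, 0.5)
rng = np.random.default_rng(1)
data = simulate(true_params, t) + rng.normal(scale=0.2, size=t.size)

# Deterministic least-squares fit of the ODE to the noisy trajectory
fit = least_squares(lambda p: simulate(p, t) - data, x0=[1.0, 1.0],
                    bounds=([0, 0], [np.inf, np.inf]))
print(fit.x)   # close to (5.0, 0.5)
```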

17. Quantifying silica in filter-deposited mine dusts using infrared spectra and partial least squares regression.

PubMed

Weakley, Andrew Todd; Miller, Arthur L; Griffiths, Peter R; Bayman, Sean J

2014-07-01

The feasibility of measuring airborne crystalline silica (α-quartz) in noncoal mine dusts using a direct-on-filter method of analysis is demonstrated. Respirable α-quartz was quantified by applying a partial least squares (PLS) regression to the infrared transmission spectra of mine-dust samples deposited on porous polymeric filters. This direct-on-filter method deviates from the current regulatory determination of respirable α-quartz by refraining from ashing the sampling filter and redepositing the analyte prior to quantification using either infrared spectrometry for coal mines or X-ray diffraction (XRD) for noncoal mines. Since XRD is not field portable, this study evaluated the efficacy of Fourier transform infrared spectrometry for silica determination in noncoal mine dusts. PLS regressions were performed using select regions of the spectra from nonashed samples, with important wavenumbers selected using a novel modification to the Monte Carlo unimportant variable elimination procedure. Wavenumber selection helped to improve PLS prediction, reduce the number of required PLS factors, and identify additional silica bands distinct from those currently used in regulatory enforcement. PLS regression appeared robust against the influence of residual filter and extraneous mineral absorptions while outperforming ordinary least squares calibration. These results support the quantification of respirable silica in noncoal mines using field-portable infrared spectrometers. PMID:24830397

18. Online segmentation of time series based on polynomial least-squares approximations.

PubMed

Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

2010-12-01

The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial-obtained by means of the update steps-can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg-which is suitable for many data streaming applications-offers high accuracy at very low computational costs. PMID:20975120
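
SwiftSeg's fast recursive updates are not reproduced here, but the underlying idea, least-squares polynomial fits in sliding windows whose fit error flags segment boundaries, can be sketched directly (a naive refit per window, not the paper's constant-time update in an orthogonal-polynomial basis):

```python
import numpy as np

def sliding_poly_coeffs(signal, window, degree):
    """Fit a least-squares polynomial in each sliding window.

    The low-order coefficients estimate average, slope, curvature, etc.
    of the signal inside each window; a jump in the fit error suggests
    a segment boundary.
    """
    t = np.arange(window)
    out = []
    for start in range(len(signal) - window + 1):
        seg = signal[start:start + window]
        coeffs = np.polyfit(t, seg, degree)      # least-squares fit
        resid = seg - np.polyval(coeffs, t)
        out.append((coeffs, float(np.sqrt(np.mean(resid ** 2)))))
    return out

# A ramp followed by a plateau: the fit error spikes only in windows
# that straddle the kink, marking it as a segment boundary.
sig = np.concatenate([2.0 * np.arange(20), np.full(20, 38.0)])
results = sliding_poly_coeffs(sig, window=10, degree=1)
```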

19. Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.

PubMed

Li, Yusong; LeBoeuf, Eugene J; Basu, P K

2005-10-01

A numerical model of the lattice Boltzmann method (LBM) utilizing least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh; while high stability is implied from a von Neumann linearized stability analysis. Implemented on unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory compared to traditional LBM. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of a randomly generated porous media, which further demonstrates its inherent geometric flexibility. PMID:16383571

20. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

PubMed Central

Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

2013-01-01

The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of the training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629

1. On the decoding of intracranial data using sparse orthonormalized partial least squares

van Gerven, Marcel A. J.; Chao, Zenas C.; Heskes, Tom

2012-04-01

It has recently been shown that robust decoding of motor output from electrocorticogram signals in monkeys over prolonged periods of time has become feasible (Chao et al 2010 Front. Neuroeng. 3 1-10 ). In order to achieve these results, multivariate partial least-squares (PLS) regression was used. PLS uses a set of latent variables, referred to as components, to model the relationship between the input and the output data and is known to handle high-dimensional and possibly strongly correlated inputs and outputs well. We developed a new decoding method called sparse orthonormalized partial least squares (SOPLS) which was tested on a subset of the data used in Chao et al (2010) (freely obtainable from neurotycho.org (Nagasaka et al 2011 PLoS ONE 6 e22561)). We show that SOPLS reaches the same decoding performance as PLS using just two sparse components which can each be interpreted as encoding particular combinations of motor parameters. Furthermore, the sparse solution afforded by the SOPLS model allowed us to show the functional involvement of beta and gamma band responses in premotor and motor cortex for predicting the first component. Based on the literature, we conjecture that this first component is involved in the encoding of movement direction. Hence, the sparse and compact representation afforded by the SOPLS model facilitates interpretation of which spectral, spatial and temporal components are involved in successful decoding. These advantages make the proposed decoding method an important new tool in neuroprosthetics.

2. A least square real time quality control routine for the North Warning Netted Radar System

Leung, Henry; Blanchette, Martin

1994-12-01

The ground surveillance radar group of the Radar and Space Division of DREO has a requirement to investigate the feasibility of, and propose a cost-effective approach to, correcting the Real Time Quality Control (RTQC) registration error problem of the North Warning System (NWS). The U.S.-developed RTQC algorithm works poorly at northern Canadian radar sites. This is mainly caused by the inability of the RTQC algorithm to properly calculate the radar position bias when there is low aircraft traffic in areas of overlapping radar coverage, which results in track ambiguity and in the display of ghost tracks. In this report, a modification of the RTQC algorithm using least-squares techniques is proposed. The proposed least-squares RTQC (LS-RTQC) algorithm was tested with real recorded data from the NWS and was found to work efficiently, performing properly in a low-aircraft-traffic environment with low computational complexity. The algorithm has been sent to the NORAD software support unit at Tyndall Air Force Base for testing.

3. [Biomass Compositional Analysis Using Sparse Partial Least Squares Regression and Near Infrared Spectrum Technique].

PubMed

Yao, Yan; Wang, Chang-yue; Liu, Hui-jun; Tang, Jian-bin; Cai, Jin-hui; Wang, Jing-jun

2015-07-01

Forest bio-fuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called Sparse Partial Least Squares Regression (SPLS) is used to construct a proximate analysis model, combined with near-infrared spectroscopy, to analyze the fuel characteristics of sawdust. The moisture, ash, volatile and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected by a Nicolet NIR spectrometer. After being filtered by a wavelet transform, all of the samples were divided into a training set and a validation set according to sample category and producing area. SPLS, Principal Component Regression (PCR), Partial Least Squares Regression (PLS) and the Least Absolute Shrinkage and Selection Operator (LASSO) are presented as prediction models. The results showed that SPLS can select grouped wavelengths and improve the prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while the other compositions have not been confirmed yet. In a word, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and will play an increasingly important role in the field of NIR application. PMID:26717741

4. Interval analysis approach to rank determination in linear least squares problems

SciTech Connect

Manteuffel, T.A.

1980-06-01

The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices A^I is defined that contains all possible perturbations of A due to uncertainties; A^I is said to be rank deficient if any member of A^I is rank deficient. A modification to the QR decomposition method of solution of the least-squares problem allows a determination of the rank of A^I and a partial interval analysis of the solution vector x. This procedure requires the computation of R^(-1). Another modification is proposed which determines the rank of A^I without computing R^(-1). The additional computational effort is O(N^2), where N is the column dimension of A.
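
A rough NumPy sketch of the idea (not the paper's interval arithmetic): compute a QR factorization and declare the numerical rank by comparing each |R_ii| against a threshold derived from the entrywise uncertainty. The threshold used here is a simple heuristic:

```python
import numpy as np

def interval_rank(A, delta):
    """Numerical rank of A when each entry is uncertain by up to +/- delta.

    Treats a diagonal entry of R as zero when it is smaller than the
    shift that the uncertainty could induce.  (Heuristic threshold; the
    paper uses proper interval analysis, and QR without column pivoting
    is not a reliable rank detector in general.)
    """
    m, n = A.shape
    R = np.linalg.qr(A, mode='r')
    tol = delta * np.sqrt(m * n)   # rough bound on the induced perturbation
    return int(np.sum(np.abs(np.diag(R)) > tol))

# Third column = col1 + col2 plus a tiny component: rank depends on
# how large the assumed uncertainty delta is.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1e-6],
              [1.0, 1.0, 2.0]])
```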

5. Least Squares Shadowing Sensitivity Analysis of Chaotic and Turbulent Fluid Flows

Blonigan, Patrick; Wang, Qiqi; Gomez, Steven

2013-11-01

Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as those obtained using high-fidelity turbulence simulations. This breakdown is due to the "Butterfly Effect": the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic and turbulent fluid flows. LSS computes gradients using the "shadow trajectory," a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. This talk will outline Least Squares Shadowing and demonstrate it on several chaotic and turbulent fluid flows, including homogeneous isotropic turbulence, Rayleigh-Bénard convection and turbulent channel flow. We would like to acknowledge AFOSR Award F11B-T06-0007 under Dr. Fariba Fahroo and NASA Award NNH11ZEA001N under Dr. Harold Atkins, as well as financial support from ConocoPhillips, the NDSEG fellowship and the ANSYS Fellowship.

6. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

PubMed Central

Zhu, Qingxin; Niu, Xinzheng

2016-01-01

By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

7. An Augmented Classical Least Squares Method for Quantitative Raman Spectral Analysis against Component Information Loss

PubMed Central

Zhou, Yan; Cao, Hui

2013-01-01

We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR. PMID:23956689

8. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

NASA Technical Reports Server (NTRS)

Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

1991-01-01

The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview is presented of RTOD/E capabilities, along with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the August 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made of the consistency of the results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

9. Determination of Protein Secondary Structure from Infrared Spectra Using Partial Least-Squares Regression.

PubMed

Wilcox, Kieaibi E; Blanch, Ewan W; Doig, Andrew J

2016-07-12

Infrared (IR) spectra contain substantial information about protein structure. This has previously most often been exploited by using known band assignments. Here, we convert spectral intensities in bins within Amide I and II regions to vectors and apply machine learning methods to determine protein secondary structure. Partial least squares was performed on spectra of 90 proteins in H2O. After preprocessing and removal of outliers, 84 proteins were used for this work. Standard normal variate and second-derivative preprocessing methods on the combined Amide I and II data generally gave the best performance, with root-mean-square values for prediction of ∼12% for α-helix, ∼7% for β-sheet, 7% for antiparallel β-sheet, and ∼8% for other conformations. Analysis of Fourier transform infrared (FTIR) spectra of 16 proteins in D2O showed that secondary structure determination was slightly poorer than in H2O. Interval partial least squares was used to identify the critical regions within spectra for secondary structure prediction and showed that the sides of bands were most valuable, rather than their peak maxima. In conclusion, we have shown that multivariate analysis of protein FTIR spectra can give α-helix, β-sheet, other, and antiparallel β-sheet contents with good accuracy, comparable to that of circular dichroism, which is widely used for this purpose. PMID:27322779

10. Modeling individual HRTF tensor using high-order partial least squares

Huang, Qinghua; Li, Lin

2014-12-01

A tensor is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters as the inputs of the model are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core tensor. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), aiming to seek a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings are used to control the complexity of the model and prevent overfitting feasibly. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate the better performance for predicting individual HRTFs especially for the sound directions ipsilateral to the concerned ear. The subjective listening tests show that the predicted individual HRTFs are approximate to the measured HRTFs for the sound localization.

11. Random dynamic load identification based on error analysis and weighted total least squares method

Jia, You; Yang, Zhichun; Guo, Ning; Wang, Le

2015-12-01

Random dynamic load identification problems in structural dynamics are in general ill-posed. A common approach is to reformulate them into well-posed problems by numerical regularization methods. In a previous paper by the authors, a random dynamic load identification model was built, and a weighted regularization approach based on proper orthogonal decomposition (POD) was proposed to identify the random dynamic loads. In this paper, the upper bound of the relative load identification error in the frequency domain is derived. The selection condition and the specific form of the weighting matrix are also proposed and validated analytically and experimentally. To improve the accuracy of random dynamic load identification, a weighted total least squares method is proposed to reduce the impact of these errors. To further validate the feasibility and effectiveness of the proposed method, a comparative study of the proposed method and other methods is conducted experimentally. The experimental results demonstrate that the weighted total least squares method is more effective than the other methods for random dynamic load identification.

12. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

SciTech Connect

Avron, Haim; Ng, Esmond G.; Toledo, Sivan

2008-03-21

We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient, we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows, we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
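The right-preconditioning idea can be sketched in dense form with made-up data: take the R factor of a QR factorization of a perturbed A, run LSQR on A R⁻¹, and map the solution back (the perturbation here is random, not one of the paper's structured modifications):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 20))
b = rng.normal(size=100)

# R factor of a QR factorization of a (here: randomly) perturbed A,
# used as a right preconditioner for LSQR.
A_pert = A + 1e-3 * rng.normal(size=A.shape)
R = np.linalg.qr(A_pert, mode='r')

# Solve min_y ||A R^{-1} y - b||_2, then recover x = R^{-1} y.
op = LinearOperator(
    A.shape,
    matvec=lambda y: A @ np.linalg.solve(R, y),
    rmatvec=lambda z: np.linalg.solve(R.T, A.T @ z),
)
y = lsqr(op, b, atol=1e-12, btol=1e-12)[0]
x = np.linalg.solve(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # direct reference solution
```

Because A R⁻¹ has nearly orthonormal columns, LSQR converges in very few iterations; the paper's contribution is showing that this still holds for the structured perturbations listed above.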

13. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

2015-08-01

Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate the data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. To obtain an accurate, high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is a demanding task. The limited-memory BFGS (L-BFGS) method approximates the inverse Hessian using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration and validate the approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method effectively recovers the reflectivity model and converges faster than two reference gradient methods. It may prove significant for imaging of complex subsurface structures in general.
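The three steps above can be sketched on a toy linearized problem with SciPy's L-BFGS implementation (an arbitrary random matrix stands in for the demigration operator; nothing Kirchhoff-specific is modeled):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
G = rng.normal(size=(200, 50))     # stand-in for the demigration operator
m_true = rng.normal(size=50)       # "true" reflectivity model
d_obs = G @ m_true                 # observed data

def misfit(m):
    r = G @ m - d_obs              # step 1: data residuals
    return 0.5 * (r @ r), G.T @ r  # objective and its gradient (step 2)

# Step 3: update the model with L-BFGS, which approximates the inverse
# Hessian from a short history of past gradients (maxcor pairs, m <= 10).
res = minimize(misfit, np.zeros(50), jac=True, method='L-BFGS-B',
               options={'maxcor': 10, 'maxiter': 500})
```

`res.x` then plays the role of the inverted reflectivity model; the limited-memory update avoids ever forming the dense Hessian.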

14. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

PubMed

Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

2016-01-01

By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning; (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise; (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity; (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost; and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

15. Non-negative least-squares variance component estimation with application to GPS time series

Amiri-Simkooei, A. R.

2016-05-01

Negative variance components can occur in many geodetic applications. This problem can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. The NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 GPS permanent stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise; all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with small average amplitudes of 0.16, 0.13, and 0.45 mm/yr^{1/2} for the north, east, and up components, respectively. About half of the positive estimated random walk amplitudes are statistically significant, indicating that one-third of the total time series have significant random walk noise.
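The key behavior described above, a truly absent component being estimated as exactly zero rather than negative, can be demonstrated with SciPy's NNLS solver on synthetic data (a toy stand-in, not the paper's VCE normal equations):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
# Toy design matrix and observations; the middle component is truly
# absent, so its non-negative estimate should come out (near) zero.
A = rng.uniform(size=(30, 3))
v_true = np.array([2.0, 0.0, 1.5])
b = A @ v_true

v_hat, resid = nnls(A, b)
```

An unconstrained least-squares fit of noisy data could return a slightly negative middle component; the active-set NNLS solution clamps it to zero, mirroring the automatic zero-amplitude estimates reported for absent noise models.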

16. Retrieval of Vegetation Water Content from Reflectance Using Genetic Algorithm-Partial Least Squares Regression and Neural Networks

Li, L.; Riaño, D.; Patricio, M.; Cheng, Y.; Ustin, S.

Remote estimation of vegetation water content has important implications for agricultural management practices and forest fire monitoring. Vegetation water content is also useful in estimating leaf area index using optical remote sensing methods. This study investigates the performance of genetic algorithms coupled with partial least squares (GA-PLS) modeling of spectral reflectance in retrieving equivalent water thickness (EWT) at the leaf and canopy levels, and compares results from GA-PLS modeling with those from artificial neural networks (ANN). The genetic algorithm is used to identify a subset of spectral bands sensitive to the variation in EWT, and PLS is then applied to relate the reflectance of the identified bands to EWT values. The advantage of using ANN is its ability to model nonlinear transfer functions at a higher accuracy than regression analysis. GA-PLS and ANN were applied to the LOPEX dataset, to datasets simulated by a leaf radiative transfer model (PROSPECT) and a canopy radiative transfer model (SAILH), and to remotely sensed AVIRIS and MODIS imagery. The results indicate that GA-PLS and ANN are both capable of retrieving EWT from measured and simulated leaf reflectance, achieving very good prediction (r^2 ≈ 0.90). Retrieval using real and simulated canopy data indicates that both GA-PLS and ANN have degraded performance due to the effects of soil background and leaf dry matter on the reflectance, but the retrieval accuracies were still highly valuable. The results also show that although nonlinear transfer functions…

17. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

2014-09-01

Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. It is therefore important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by a genetic algorithm (GA-ANN) and by principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by a genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the above multivariate calibration models to resolve the UV spectra of the four-component mixtures using a simple and widely used UV spectrophotometer.

18. Radio astronomical image formation using constrained least squares and Krylov subspaces

2016-04-01

Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
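The inequality-constrained least-squares formulation, non-negative pixels with a per-pixel upper bound, can be sketched with SciPy's bounded solver (a trust-region method, not the paper's active-set scheme, and an arbitrary vector stands in for the dirty-image bound):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
M = rng.normal(size=(60, 20))              # stand-in measurement matrix
x_true = rng.uniform(0.0, 1.0, size=20)    # pixel intensities
y = M @ x_true + 1e-3 * rng.normal(size=60)

# Non-negativity below, a per-pixel upper bound above (a stand-in for
# the tighter dirty-image-style bound described in the abstract).
upper = np.full(20, 1.0)
res = lsq_linear(M, y, bounds=(0.0, upper))
```

The paper's contribution is making this kind of constrained solve scale to large pixel grids via active sets and Krylov-subspace solvers; the toy above only shows the problem shape.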

19. Parameter estimation in PS-InSAR deformation studies using integer least-squares techniques

Hanssen, R. F.; Ferretti, A.

2002-12-01

Interferometric synthetic aperture radar (InSAR) methods are increasingly used for measuring deformations of the earth's surface. Unfortunately, in many cases the problem of temporal decorrelation hampers successful measurements over longer time intervals. The permanent scatterers approach (PS-InSAR) for processing time series of SAR interferograms proves to be a good alternative, recognizing and analyzing single scatterers with a reliable phase behavior in time. Ambiguity resolution, or phase unwrapping, is the process of resolving the unknown cycle ambiguities in the radar data, and is one of the main problems in InSAR data analysis. In a single interferogram, phase unwrapping and parameter estimation are usually solved in separate consecutive computations. It is often assumed that the final result of the phase unwrapping is a deterministic signal, used as input for the parameter estimation, e.g. of elevation and deformation. As a result, errors in the ambiguity resolution are usually not propagated into the final results, which can lead to a serious underestimation of errors in the parameters and consequently in the geophysical models which use these parameters. In fact, however, the resolved phase ambiguities are stochastic as well, even though they are described with a probability mass function instead of a probability density function. In this paper, the integer least-squares technique for integrated ambiguity resolution and parameter estimation is applied to PS-InSAR data analysis, using a three-step procedure. First, a standard least-squares adjustment is performed, treating the ambiguities as float parameters, leading to the real-valued 'float' solution. Second, the ambiguities are resolved using the float ambiguity estimates. Third, if the second step was successful, the integer estimates are used to correct the float solution estimate. It has been proved that the integer least-squares estimator is optimal in the sense that it…
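The three-step float/fix/correct procedure can be sketched on a toy mixed integer/real model, with naive rounding standing in for a proper integer least-squares search such as LAMBDA (all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy model: y = A z + B p + noise, with z integer (cycle ambiguities)
# and p real-valued (e.g. elevation/deformation parameters).
A = rng.normal(size=(40, 3))
B = rng.normal(size=(40, 2))
z_true = np.array([7.0, -3.0, 12.0])
p_true = np.array([0.4, -1.2])
y = A @ z_true + B @ p_true + 0.01 * rng.normal(size=40)

# Step 1: float solution, treating all parameters as real-valued.
X = np.hstack([A, B])
float_sol = np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: fix the ambiguities (rounding stands in for integer LS).
z_fix = np.round(float_sol[:3])

# Step 3: correct the float solution using the fixed integers.
p_fix = np.linalg.lstsq(B, y - A @ z_fix, rcond=None)[0]
```

In real PS-InSAR work the ambiguities are strongly correlated and plain rounding fails; the integer least-squares estimator with decorrelation is what makes step 2 reliable.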

20. Two new methods for solving large scale least squares in geodetic surveying computations

Murigande, Ch.; Toint, Ph. L.; Paquet, P.

1986-12-01

This paper considers the solution of linear least squares problems arising in space geodesy, with a special application to multistation adjustment by a short arc method based on Doppler observations. The widely used second-order regression algorithm due to Brown (1976) for reducing the normal equations system is briefly recalled. Then two algorithms which avoid the use of the normal equations are proposed. The first one is a direct method that applies orthogonal transformations to the observation matrix directly, in order to reduce it to upper triangular form. The solution is then obtained by back-substitution. The second method is iterative and uses a preconditioned conjugate gradient technique. A comparison of the three procedures is provided on data of the second European Doppler Observation Campaign.
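The contrast between the normal-equations route and the direct orthogonal route (triangularize the observation matrix, then back-substitute) can be shown in a few lines on a small dense problem:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)

# Normal equations: forming A^T A squares the condition number.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Orthogonal route: reduce A to upper triangular form with QR and
# back-substitute, never forming A^T A.
Q, R = np.linalg.qr(A)
x_qr = solve_triangular(R, Q.T @ b)
```

On a well-conditioned problem the two agree; the advantage of the orthogonal (and of the preconditioned conjugate gradient) approach shows up on large, ill-conditioned geodetic systems.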

1. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

PubMed

Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

2016-02-01

To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental data points was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has a more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a useful reference for modeling the cattle manure pyrolysis process and other similar processes. PMID:26708483
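Unlike a standard SVM, an LS-SVM reduces training to a single linear solve. A minimal RBF-kernel regression sketch on synthetic data (not the pyrolysis measurements; the hyperparameters are arbitrary):

```python
import numpy as np

def lssvm_fit(X, y, gamma=50.0, sigma=1.0):
    """LS-SVM regression: one linear solve of the dual system
    [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y] with an RBF kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, a = sol[0], sol[1:]

    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2.0 * sigma ** 2)) @ a + b

    return predict

rng = np.random.default_rng(7)
X = rng.uniform(-3.0, 3.0, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)
predict = lssvm_fit(X, y)
Xq = np.linspace(-2.0, 2.0, 50)[:, None]
mse = float(np.mean((predict(Xq) - np.sin(Xq[:, 0])) ** 2))
```

With only tens of training points, as in the 33-sample study above, this closed-form training is part of what makes LS-SVM attractive relative to iteratively trained ANNs.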

2. Quantification of brain lipids by FTIR spectroscopy and partial least squares regression

Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph

2009-01-01

Brain tissue is characterized by high lipid content. Its content decreases and the lipid composition changes during transformation from normal brain tissue to tumors. Therefore, the analysis of brain lipids might complement the existing diagnostic tools to determine the tumor type and tumor grade. Objective of this work is to extract lipids from gray matter and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts that were predicted by the PLS regression of IR spectra was compared with lipid quantification by thin layer chromatography.

3. Concerning an application of the method of least squares with a variable weight matrix

NASA Technical Reports Server (NTRS)

Sukhanov, A. A.

1979-01-01

An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
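The iterative procedure described above can be sketched as a fixed-point loop: estimate the state, recompute the weight matrix from the estimate, and re-estimate until the iterates stop changing (the weight function below is hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(30, 2))
x_true = np.array([1.0, 2.0])
b = A @ x_true + 0.05 * rng.normal(size=30)

def weights(x):
    # Hypothetical state-dependent weight function: down-weight
    # observations with large residuals at the current estimate.
    r = A @ x - b
    return 1.0 / (1.0 + r ** 2)

x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from ordinary LS
for _ in range(100):
    W = np.diag(weights(x))                # weights depend on the estimate
    x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    if np.linalg.norm(x_new - x) < 1e-12:  # fixed point reached
        break
    x = x_new
```

The paper's contribution is precisely the conditions under which such a loop has a unique limit; the sketch assumes convergence rather than proving it.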

4. A Robust PCT Method Based on the Complex Least Squares Adjustment Method

Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

2013-07-01

The Polarization Coherence Tomography (PCT) method performs well in deriving the vertical structure of vegetation. However, errors caused by temporal decorrelation and by inaccurate vegetation height and ground phase estimates propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimations of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the estimation of vegetation vertical structure.

5. Elemental PGNAA analysis using gamma-gamma coincidence counting with the library least-squares approach

Metwally, Walid A.; Gardner, Robin P.; Mayo, Charles W.

2004-01-01

An accurate method for determining elemental composition using gamma-gamma coincidence counting is presented. To demonstrate the feasibility of this method for PGNAA, a system of three radioisotopes (Na-24, Co-60 and Cs-134) that emit coincident gamma rays was used. Two HPGe detectors were connected to a system that allowed both singles and coincidences to be collected simultaneously. A known mixture of the three radioisotopes was used, and data were deliberately collected at relatively high counting rates to determine the effect of pulse pile-up distortion. The results of library least-squares analysis of both normal and coincidence counting are presented and compared to the known amounts. The coincidence results are shown to give much better accuracy. It appears that, in addition to the expected advantage of reduced background, the coincidence approach is considerably more resistant to pulse pile-up distortion.

6. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

PubMed Central

Zou, Weiyao; Burns, Stephen A.

2012-01-01

A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
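The damped least-squares command computation, with the damping factor set to the median of the eigenvalue spectrum as the abstract suggests, can be sketched on a toy influence matrix (random, not a real DM model):

```python
import numpy as np

rng = np.random.default_rng(9)
H = rng.normal(size=(100, 30))   # stand-in for a DM influence matrix
w = rng.normal(size=100)         # measured wavefront vector

# Damping factor: median of the eigenvalue spectrum of H^T H
# (the abstract's rule, applied here to a toy matrix).
eigvals = np.linalg.eigvalsh(H.T @ H)
lam = np.median(eigvals)

# Damped least-squares command: c = (H^T H + lam*I)^{-1} H^T w.
c = np.linalg.solve(H.T @ H + lam * np.eye(30), H.T @ w)

# Undamped solution for comparison; damping trades a slightly larger
# residual for a smaller (lower-stroke) command vector.
c_ls = np.linalg.lstsq(H, w, rcond=None)[0]
resid_damped = np.linalg.norm(H @ c - w)
resid_ls = np.linalg.norm(H @ c_ls - w)
```

In the W-T system, this stroke-versus-residual trade-off is what lets the woofer absorb the high-stroke low-order part of the wavefront while the tweeter handles the rest.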

7. UNIMAP: a generalized least-squares map maker for Herschel data

Piazzo, Lorenzo; Calzoletti, Luca; Faustini, Fabiana; Pestalozzi, Michele; Pezzuto, Stefano; Elia, Davide; di Giorgio, Anna; Molinari, Sergio

2015-02-01

The Herschel space telescope hosts two infrared photometers having an unprecedented resolution, sensitivity and dynamic range. The map making, i.e. the formation of sky images from the instruments' data, is critical for the full exploitation of the satellite and is a difficult task, since the readouts are affected by several disturbances, most notably by correlated noise. An effective map making approach is based on generalized least squares (GLS). However, when applied to Herschel data this approach poses several challenges and requires a specific pre- and post-processing. In the paper, we describe these challenges and introduce a set of algorithms and procedures which successfully address the issues. We also describe the implementation of the procedures and how these are integrated into an image formation software called UNIMAP, which is the first GLS map maker capable of automatically producing quality Herschel images with manageable memory and complexity requirements.
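The GLS estimate underlying such map makers can be sketched for a known noise covariance, computed by whitening and solving an ordinary least-squares problem (a toy AR(1) covariance stands in for the correlated readout noise; none of the Herschel-specific pre- and post-processing is modeled):

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 200, 5
A = rng.normal(size=(n, p))         # stand-in observation/pointing matrix
x_true = rng.normal(size=p)         # "sky" parameters

# Correlated readout noise via a toy AR(1) covariance C_ij = rho^|i-j|.
rho = 0.9
idx = np.arange(n)
C = rho ** np.abs(np.subtract.outer(idx, idx))
L = np.linalg.cholesky(C)
y = A @ x_true + 0.1 * (L @ rng.normal(size=n))

# GLS: x = (A^T C^{-1} A)^{-1} A^T C^{-1} y, computed by whitening
# both sides with L^{-1} and solving ordinary least squares.
Aw = np.linalg.solve(L, A)
yw = np.linalg.solve(L, y)
x_gls = np.linalg.lstsq(Aw, yw, rcond=None)[0]
```

The practical difficulty the paper addresses is that for real map making, C and A are far too large to handle this directly, hence the iterative solvers and specialized processing in UNIMAP.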

8. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

SciTech Connect

Le, Huy Q.; Molloi, Sabee

2011-01-15

Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues

9. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

SciTech Connect

Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D

2008-01-01

Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three multivariate analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
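The PCA step used for rock-type classification can be sketched on synthetic "spectra" via the SVD of the mean-centered data matrix (random mixtures stand in for the LIBS intensities):

```python
import numpy as np

rng = np.random.default_rng(11)
# 18 synthetic "spectra": linear mixtures of 3 latent components plus
# noise, standing in for the LIBS spectra of the 18 rock samples.
components = rng.normal(size=(3, 100))
mixing = rng.uniform(size=(18, 3))
spectra = mixing @ components + 0.01 * rng.normal(size=(18, 100))

Xc = spectra - spectra.mean(axis=0)            # mean-center
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                 # sample scores per component
explained = s ** 2 / np.sum(s ** 2)            # variance explained
```

Samples with similar chemistry cluster in the low-dimensional score space, which is what PCA and SIMCA exploit for classification.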

10. Low PMEPR OFDM Radar Waveform Design Using the Iterative Least Squares Algorithm

Huang, Tianyao; Zhao, Tong

2015-11-01

This letter considers waveform design of orthogonal frequency division multiplexing (OFDM) signal for radar applications, and aims at mitigating the envelope fluctuation in OFDM. A novel method is proposed to reduce the peak-to-mean envelope power ratio (PMEPR), which is commonly used to evaluate the fluctuation. The proposed method is based on the tone reservation approach, in which some bits or subcarriers of OFDM are allocated for decreasing PMEPR. We introduce the coefficient of variation of envelopes (CVE) as the cost function for waveform optimization, and develop an iterative least squares algorithm. Minimizing CVE leads to distinct PMEPR reduction, and it is guaranteed that the cost function monotonically decreases by applying the iterative algorithm. Simulations demonstrate that the envelope is significantly smoothed by the proposed method.

11. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.

PubMed

Choi, Sou-Cheng T; Saunders, Michael A

2014-02-01

We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
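SciPy ships the classic MINRES algorithm (not MINRES-QLP); a quick symmetric solve compared against a dense direct solution shows the basic usage:

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(12)
B = rng.normal(size=(30, 30))
A = B + B.T + 60.0 * np.eye(30)     # symmetric (here also positive definite)
b = rng.normal(size=30)

x, info = minres(A, b)              # info == 0 signals convergence
x_direct = np.linalg.solve(A, b)    # dense reference solution
```

For singular or ill-conditioned symmetric systems, where MINRES can be unstable and does not return the minimum-length solution, MINRES-QLP is the appropriate replacement.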

12. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

NASA Technical Reports Server (NTRS)

Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

2016-01-01

Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

13. First-order system least squares for the pure traction problem in planar linear elasticity

SciTech Connect

Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

1996-12-31

This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L^2 norms to define the FOSLS functional, is shown under certain H^2 regularity assumptions to admit optimal H^1-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H^{-1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L^2 norm and for displacement in an H^1 norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

14. Constrained least-squares estimation in deconvolution from wave-front sensing

Ford, S. D.; Welsh, B. M.; Roggemann, M. C.

1998-05-01

We address the optimal processing of astronomical images using the deconvolution from wave-front sensing technique (DWFS). A constrained least-squares (CLS) solution which incorporates ensemble average DWFS data is derived using Lagrange minimization. The new estimator requires DWFS data, noise statistics, OTF statistics, and a constraint. The constraint can be chosen such that the algorithm selects a conventional regularization constant automatically. No ad hoc parameter tuning is necessary. The algorithm uses an iterative Newton-Raphson minimization to determine the optimal Lagrange multiplier. Computer simulation of a 1 m telescope imaging through atmospheric turbulence is used to test the estimation scheme. CLS object estimates are compared with those processed via manual tuning of the regularization constant. The CLS algorithm provides images with comparable resolution and is computationally inexpensive, converging to a solution in less than 10 iterations.

15. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

Yu, Lean; Wang, Shouyang; Lai, K. K.

Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to the more important classes. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby larger weights are given to the least-squares classification errors of important classes than to those of unimportant classes, while keeping the regularization terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
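
As a rough numpy illustration of the class-weighting idea (a regularized weighted least-squares classifier, not the authors' exact LSSVC formulation; the data, the per-class weights C_pos/C_neg, and the regularization value lam are all hypothetical):

```python
import numpy as np

# Hypothetical toy data: class +1 (important, e.g. "default") is rare,
# class -1 is common; errors on class +1 get a larger weight.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (10, 2))])
y = np.hstack([-np.ones(40), np.ones(10)])

C_pos, C_neg = 10.0, 1.0                  # per-class error weights (assumed)
w = np.where(y > 0, C_pos, C_neg)         # per-sample weight vector

# Minimize  lam * ||beta||^2 + sum_i w_i (y_i - [1, x_i] beta)^2
A = np.hstack([np.ones((len(y), 1)), X])  # design matrix with bias column
W = np.diag(w)
lam = 1.0                                 # regularization parameter (assumed)
beta = np.linalg.solve(A.T @ W @ A + lam * np.eye(3), A.T @ W @ y)

pred = np.sign(A @ beta)                  # classify by sign of the LS output
```

The only change relative to an unweighted regularized least-squares classifier is the diagonal weight matrix W, mirroring how C-VLSSVC only modifies the regularization parameter per class.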

16. A least-squares finite element method for 3D incompressible Navier-Stokes equations

NASA Technical Reports Server (NTRS)

Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

1993-01-01

The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. Taylor-Gortler-like vortices are observed at Re = 1,000.
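
A compact sketch of the matrix-free Jacobi-preconditioned conjugate gradient idea (shown here with a tiny explicit SPD matrix for clarity; in the actual scheme the `matvec` callback assembles the product element by element so no global matrix is ever formed):

```python
import numpy as np

def jacobi_pcg(matvec, diag, b, tol=1e-10, maxiter=200):
    """Jacobi-preconditioned conjugate gradient for SPD systems.
    Only a matrix-vector product and the diagonal are needed (matrix-free)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = r / diag                       # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system standing in for the LSFEM algebraic system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_pcg(lambda v: A @ v, np.diag(A), b)
```

Because the LSFEM system is symmetric positive-definite by construction, CG applies directly; the Jacobi preconditioner costs only one elementwise division per iteration.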

17. Simultaneous evaluation of interrelated cross sections by generalized least-squares and related data file requirements

SciTech Connect

Poenitz, W.P.

1984-10-25

Though several cross sections have been designated as standards, they are not basic units and are interrelated by ratio measurements. Moreover, as such interactions as ^6Li + n and ^10B + n involve only two and three cross sections respectively, total cross section data become useful for the evaluation process. The problem can be resolved by a simultaneous evaluation of the available absolute and shape data for cross sections, ratios, sums, and average cross sections by generalized least-squares. A data file is required for such evaluation which contains the originally measured quantities and their uncertainty components. Establishing such a file is a substantial task because data were frequently reported as absolute cross sections where ratios were measured, without sufficient information on which reference cross section and which normalization were utilized. Reporting of uncertainties is often missing or incomplete. The requirements for data reporting will be discussed.

18. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

NASA Technical Reports Server (NTRS)

Wu, Jie; Jiang, Bo-nan

1996-01-01

The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

19. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

NASA Technical Reports Server (NTRS)

Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

1996-01-01

Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

20. Slip distribution of the 2010 Mentawai earthquake from GPS observation using least squares inversion method

2016-05-01

Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. Least-squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimum slip distribution by weighting a smoothing constraint and constraining the slip value to zero at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, the dip-slip component tends to be dominant. The seismic moment calculated from the slip distribution was 6.89 × 10^20 N·m, which is equivalent to a moment magnitude of 7.8.
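
The quoted magnitude is consistent with the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m; a quick check:

```python
import math

def moment_magnitude(M0_Nm):
    """Moment magnitude Mw from seismic moment M0 (in N*m),
    using the standard relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(M0_Nm) - 9.1)

# Seismic moment from the slip inversion
Mw = moment_magnitude(6.89e20)   # ~7.83, matching the reported 7.8
```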

1. Underwater terrain positioning method based on least squares estimation for AUV

Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

2015-12-01

To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established, using multi-beam data as the terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for the pseudo-localization that appears in topographically flat areas, effectively reducing the impact of pseudo positioning points on matching accuracy and improving the positioning accuracy in flat terrain. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirement of accurate underwater positioning.

2. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

NASA Technical Reports Server (NTRS)

Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

1991-01-01

The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study comparing the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), are presented. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

3. Online soft sensor of humidity in PEM fuel cell based on dynamic partial least squares.

PubMed

Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

2013-01-01

Online monitoring of humidity in proton exchange membrane (PEM) fuel cells is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity from readily available measured data would therefore benefit water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error across different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
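
A minimal numpy sketch of the dynamic (lagged) soft-sensor idea: stack current and past inputs into one regressor vector and pick the lag that minimizes RMSE. Ordinary least squares stands in for PLS here, and the data are synthetic, not the HIL test-rig measurements:

```python
import numpy as np

def make_lagged(X, lag):
    """Stack sample i together with its `lag` predecessors into one row."""
    return np.asarray([X[i - lag:i + 1].ravel() for i in range(lag, len(X))])

# Synthetic stand-in data: the target depends on an input delayed by 3 steps
rng = np.random.default_rng(1)
n = 300
u = rng.normal(size=(n, 2))
y = np.zeros(n)
y[3:] = 0.8 * u[:-3, 0] + 0.2 * u[3:, 1] + 0.01 * rng.normal(size=n - 3)

# Select the time lag by comparing RMSE, as in the paper's lag selection
best = None
for lag in (1, 3, 5):
    Z = make_lagged(u, lag)                           # dynamic regressors
    t = y[lag:]
    coef, *_ = np.linalg.lstsq(Z, t, rcond=None)      # LS stand-in for PLS
    rmse = np.sqrt(np.mean((Z @ coef - t) ** 2))
    if best is None or rmse < best[1]:
        best = (lag, rmse)
```

With the delayed dependence above, lags of 3 or more capture the dynamics and the RMSE drops to roughly the noise level, which is the behavior the lag-selection step exploits.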

4. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems

PubMed Central

Choi, Sou-Cheng T.; Saunders, Michael A.

2014-01-01

We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
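
The minimum-length behavior that MINRES-QLP targets can be illustrated with numpy's pseudoinverse on a tiny singular symmetric system (a conceptual stand-in, not the MINRES-QLP algorithm itself):

```python
import numpy as np

# Rank-1 symmetric system: A x = b has infinitely many least-squares
# solutions; the pseudoinverse selects the unique minimum-length one.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 2.0])

x_min = np.linalg.pinv(A) @ b    # minimum-length (pseudoinverse) solution
```

Here x_min = [1, 1] with norm sqrt(2), whereas e.g. [2, 0] solves the system with the larger norm 2; this shortest solution is what MINRES-QLP returns for singular problems where plain MINRES may fail.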

5. Dehazing for single image with sky region via self-adaptive weighted least squares model

Wang, Dan; Zhu, Jubo; Yan, Fengxia

2016-04-01

The physical imaging model, which is based on atmospheric absorption and scattering, plays an important role in single-image dehazing, and accurate estimation of the transmission is critical for dehazing algorithms built on it. A self-adaptive weighted least squares (AWLS) model is proposed to refine the rough transmission extracted by the dark channel (DC) model. In our model, the gray-world hypothesis and an edge-preserving smoothing technique are integrated to optimize the transmission and remove the artifacts introduced by the DC model. The self-adaptive WLS model has high computational efficiency and better prevents distortion of the recovered image when the hazy image contains a sky region, a case that many other dehazing techniques cannot handle. Experimental results show that the proposed model is both effective and efficient for haze removal.

7. A library least-squares approach for scatter correction in gamma-ray tomography

Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

2015-03-01

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.

8. Quantitative infrared spectroscopy of glucose in blood using partial least-squares analyses

SciTech Connect

Ward, K.J.; Haaland, D.M.; Robinson, M.R.; Eaton, R.P.

1989-01-01

The concentration of glucose in drawn samples of human blood has been determined using attenuated total reflectance (ATR) Fourier transform infrared (FT-IR) spectroscopy and partial least-squares (PLS) multivariate calibration. A twelve-sample calibration set over the physiological glucose range of 50-400 mg/deciliter (dl) resulted in an average error of 5.2 mg/dl. These results were obtained using a cross-validated PLS calibration over all infrared data in the frequency range of 950-1200 cm^-1. These results are a dramatic improvement relative to those obtained by previous studies of this system using univariate peak height analyses. 3 refs., 3 figs.

9. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

NASA Technical Reports Server (NTRS)

Bierman, G. J.

1975-01-01

Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

10. The least-squares finite element method for low-mach-number compressible viscous flows

NASA Technical Reports Server (NTRS)

Yu, Sheng-Tao

1994-01-01

The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the extreme case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

11. Improvement of high-order least-squares integration method for stereo deflectometry.

PubMed

Ren, Hongyu; Gao, Feng; Jiang, Xiangqian

2015-12-01

Stereo deflectometry is defined as measurement of the local slope of specular surfaces using two CCD cameras as detectors and one LCD screen as a light source. Obtaining 3D topography requires integrating the calculated slope data. Currently, a high-order finite-difference-based least-squares integration (HFLI) method is used to improve the integration accuracy. However, this method cannot easily be implemented in a circular domain or when gradient data are incomplete. This paper proposes a modified, easy-implementation integration method based on HFLI (EI-HFLI), which can work in arbitrary domains and can directly and conveniently handle incomplete gradient data. To carry out the proposed algorithm in a practical stereo deflectometry measurement, gradients are calculated in both CCD frames and then mixed together as original data to be meshed into a rectangular grid format. Simulation and experiments show that this modified method is feasible and works efficiently. PMID:26836684

12. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

PubMed Central

Gregor, Jens; Fessler, Jeffrey A.

2015-01-01

Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
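
A minimal numpy sketch of the basic SIRT iteration viewed as preconditioned gradient descent (the tiny system below stands in for a real CT projection matrix; no relaxation tuning, regularizer, or ordered subsets are included):

```python
import numpy as np

def sirt(A, b, iters=200):
    """Basic SIRT: x <- x + C A^T R (b - A x), i.e. gradient descent on the
    (unweighted) LS criterion preconditioned by the diagonal scalings
    R = diag(1/row sums) and C = diag(1/column sums)."""
    R = 1.0 / A.sum(axis=1)          # inverse row sums
    C = 1.0 / A.sum(axis=0)          # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy consistent system standing in for the projection model
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = sirt(A, b)
```

Writing SIRT in this preconditioned-gradient form is exactly what makes the comparison with SQS natural: both minimize the same criterion and differ only in the diagonal scaling applied to the gradient.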

13. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression

SciTech Connect

Mizukami, Wataru; Tew, David P.; Habershon, Scott

2014-10-14

We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations, we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root-mean-square accuracy of 0.3%-0.5% up to 25 000 cm^-1 above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero-point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm^-1 and hydrogen-transfer tunnelling splittings of 21.0(4) and 3.2(4) cm^-1, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm^-1.

14. A FORTRAN 77 computer program for the least-squares analysis of chemical data in Pearce variation diagrams

Russell, J. K.

Molar-element ratios (intensive variables) plotted in X-Y variation diagrams have the attribute of reflecting the actual relationships existing between components, provided the denominator selected has a constant value. Such illustrations (Pearce variation diagrams) are particularly useful in examining data from igneous-rock suites. Analyses from rock suites usually are scrutinized for chemical variations that have been generated by simple igneous processes. The variations in major, minor, and trace-element chemistry can usually be described or approximated by a linear regression. A FORTRAN computer program to run on a CDC CYBER 170 computer is presented that generates the necessary molar-element ratios for Pearce diagrams. A best-fit straight line is determined for a specified X variable and all other possible Y variables. The calculated curve is determined by least-squares techniques that minimize the distance perpendicular to the calculated regression. Variances on the linear regression parameters (slope and intercept) are calculated, as are confidence limits on the position of the fitted line.
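
The perpendicular-distance fit described above (total least squares, or orthogonal regression) can be sketched via an SVD of the centered data; this is a numpy stand-in for the idea, not the FORTRAN 77 program itself:

```python
import numpy as np

def orthogonal_fit(x, y):
    """Best-fit line minimizing perpendicular (not vertical) distances,
    via the SVD of the centered data. Assumes the line is not vertical."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X)
    nx, ny = Vt[-1]                        # normal vector of the fitted line
    slope = -nx / ny
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Points lying exactly on y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
slope, intercept = orthogonal_fit(x, y)    # recovers slope 2, intercept 1
```

The right singular vector with the smallest singular value is the direction of least total squared perpendicular distance, which is precisely the criterion the program minimizes for Pearce diagrams.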

15. New prediction-augmented classical least squares (PACLS) methods: Application to unmodeled interferents

SciTech Connect

Haaland, David M.; Melgaard, David K.

2000-01-26

A significant improvement to the classical least squares (CLS) multivariate analysis method has been developed. The new method, called prediction-augmented classical least squares (PACLS), removes the restriction for CLS that all interfering spectral species must be known and their concentrations included during the calibration. The authors demonstrate that PACLS can correct inadequate CLS models if spectral components left out of the calibration can be identified and if their spectral shapes can be derived and added during a PACLS prediction step. The new PACLS method is demonstrated for a system of dilute aqueous solutions containing urea, creatinine, and NaCl analytes with and without temperature variations. The authors demonstrate that if CLS calibrations are performed using only a single analyte's concentration, then there is little, if any, prediction ability. However, if pure-component spectra of analytes left out of the calibration are independently obtained and added during PACLS prediction, then the CLS prediction ability is corrected and predictions become comparable to that of a CLS calibration that contains all analyte concentrations. It is also demonstrated that constant-temperature CLS models can be used to predict variable-temperature data by employing the PACLS method augmented by the spectral shape of a temperature change of the water solvent. In this case, PACLS can also be used to predict sample temperature with a standard error of prediction of 0.07 °C even though the calibration data did not contain temperature variations. The PACLS method is also shown to be capable of modeling system drift to maintain a calibration in the presence of spectrometer drift.
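
The prediction-augmentation step can be sketched in a few lines of numpy. The two Gaussian "pure-component spectra" below are synthetic stand-ins (not the paper's urea/creatinine/NaCl data), and plain least squares stands in for the full CLS machinery:

```python
import numpy as np

wave = np.arange(50)
s_analyte = np.exp(-0.5 * ((wave - 20) / 3.0) ** 2)   # modeled pure spectrum
s_interf = np.exp(-0.5 * ((wave - 35) / 4.0) ** 2)    # left out of calibration

# Prediction spectrum contains both the analyte and the unmodeled interferent
spectrum = 2.0 * s_analyte + 1.5 * s_interf

# Plain CLS prediction: only the analyte shape is in the model, so the
# interferent's overlap biases the estimated concentration
c_cls = np.linalg.lstsq(s_analyte[:, None], spectrum, rcond=None)[0]

# PACLS: augment the prediction basis with the interferent's spectral shape
S_aug = np.column_stack([s_analyte, s_interf])
c_pacls = np.linalg.lstsq(S_aug, spectrum, rcond=None)[0]
```

Augmenting the basis at prediction time, without refitting the calibration, is the essence of PACLS: the interferent's shape absorbs the signal that would otherwise contaminate the analyte estimate.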

16. The use of least squares methods in functional optimization of energy use prediction models

Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.

2012-06-01

The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters, such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE simulation results. The resulting coefficients reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. On the other hand, the method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.

17. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

NASA Technical Reports Server (NTRS)

Hsieh, Shih-Fu

1990-01-01

In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new methods depends
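
As an illustration of the Givens-rotation updating discussed above, here is a minimal (unoptimized, non-systolic) numpy sketch that folds one new observation row at a time into the triangular factor of a QRD-RLS filter:

```python
import numpy as np

def givens_update(R, d, x, y):
    """Fold a new regression row (x, y) into the upper-triangular factor R
    and transformed right-hand side d, one Givens rotation per column."""
    R = R.copy(); d = d.copy()
    x = np.asarray(x, dtype=float).copy(); y = float(y)
    for k in range(len(x)):
        r = np.hypot(R[k, k], x[k])
        if r == 0.0:
            continue                          # nothing to annihilate
        c, s = R[k, k] / r, x[k] / r          # rotation zeroing x[k]
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * x[k:]
        x[k:] = -s * Rk + c * x[k:]
        dk = d[k]
        d[k] = c * dk + s * y
        y = -s * dk + c * y
    return R, d

# Feed three observations of a 2-parameter model, then back-substitute
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]])
b = np.array([1.0, 2.0, 4.0])
R = np.zeros((2, 2)); d = np.zeros(2)
for row, obs in zip(A, b):
    R, d = givens_update(R, d, row, obs)
theta = np.linalg.solve(R, d)                 # LS estimate via R theta = d
```

Each update touches only O(n^2) entries and never forms the normal equations, which is the numerical-stability advantage of QRD-RLS; exponential forgetting would scale R and d before each update.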

18. Temporal gravity field modeling based on least square collocation with short-arc approach

Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

2014-05-01

After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to deriving the Earth's gravity field, with two main objectives. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least-squares collocation method. This method has received less attention than the least-squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of the estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the Geoforschungszentrum (GFZ), the Center for Space Research (CSR), and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

19. Least-squares dual characterization for ROI assessment in emission tomography

Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

2013-06-01

Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex reduces the estimation bias by up to 14%, resulting in a reduction in RMS error of up to 8.5% compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing can intuitively and efficiently handle the resolution-variance tradeoff.

20. [A hyperspectral subpixel target detection method based on inverse least squares method].

PubMed

Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun

2009-01-01

In the present paper, an inverse least squares (ILS) method combined with Mahalanobis distance outlier detection is discussed for detecting subpixel targets in hyperspectral images. Firstly, the inverse model between the target spectrum and all the pixel spectra was established, with the accurate target spectrum obtained beforehand; the SNV algorithm was then employed to preprocess each original pixel spectrum separately. After the pretreatment, the regression coefficients of ILS were calculated with the partial least squares (PLS) algorithm. Each point in the regression coefficient vector corresponds to a pixel in the image. The Mahalanobis distance was calculated for each point in the regression coefficient vector. Because the Mahalanobis distance represents the extent to which samples deviate from the total population, points with a Mahalanobis distance larger than 3sigma were regarded as subpixel targets. In this algorithm, no prior information other than the target spectrum is required, such as a representative background spectrum or a model of the background. In addition, the detection result is insensitive to the complexity of the background. This method was applied to AVIRIS remote sensing data. For this simulation experiment, AVIRIS remote sensing data were downloaded free of charge from the official NASA website, the spectrum of a ground object in the AVIRIS hyperspectral image was picked as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection result of the method described above was compared with that of the orthogonal subspace projection (OSP) method. The result shows that the performance of the ILS method is better than that of the traditional OSP method. The ROC (receiver operating characteristic) curve and SNR were calculated, which indicates that the ILS method possesses higher detection accuracy and less computing time than the OSP algorithm. PMID:19385196
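As a rough illustration of the detection chain described above, the sketch below substitutes a single ordinary least-squares coefficient per pixel for the paper's PLS step and applies the 3-sigma Mahalanobis rule (which, for a one-dimensional coefficient vector, reduces to a standardized distance from the mean). The synthetic spectra, band count, and implanted abundance are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bands = 400, 50
target = np.sin(np.linspace(0, 3, n_bands))            # assumed target spectrum
pixels = rng.normal(0.0, 0.2, (n_pixels, n_bands))     # synthetic background

# Implant the target at partial abundance in three "subpixel" locations.
true_idx = np.array([10, 200, 350])
pixels[true_idx] += 0.5 * target

# Per-pixel regression coefficient of the target spectrum (closed-form OLS).
coef = pixels @ target / (target @ target)

# 1-D Mahalanobis distance = standardized distance; flag coefficients > 3 sigma.
d = np.abs(coef - coef.mean()) / coef.std()
detected = np.where(d > 3.0)[0]
```

Only the target spectrum enters the computation, mirroring the abstract's point that no background model is needed.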

1. Iterative weighting of multiblock data in the orthogonal partial least squares framework.

PubMed

Boccard, Julien; Rutledge, Douglas N

2014-02-27

The integration of multiple data sources has emerged as a pivotal aspect of assessing complex systems comprehensively. This new paradigm requires the ability to separate common and redundant information from specific and complementary information during the joint analysis of several data blocks. However, the inherent problems encountered when analysing single tables are amplified with the generation of multiblock datasets. Finding the relationships between data layers of increasing complexity therefore constitutes a challenging task. In the present work, an algorithm is proposed for the supervised analysis of multiblock data structures. It combines the interpretability of the orthogonal partial least squares (OPLS) framework with the ability of common component and specific weights analysis (CCSWA) to weight each data table individually, in order to capture its specificities and efficiently handle the different sources of Y-orthogonal variation. Three applications are proposed for illustration purposes. A first example refers to a quantitative structure-activity relationship study aiming to predict the binding affinity of flavonoids toward the P-glycoprotein based on physicochemical properties. A second application concerns the integration of several groups of sensory attributes for overall quality assessment of a series of red wines. A third case study highlights the ability of the method to combine very large heterogeneous data blocks from Omics experiments in systems biology. Results were compared to the reference multiblock partial least squares (MBPLS) method to assess the performance of the proposed algorithm in terms of predictive ability and model interpretability. In all cases, ComDim-OPLS was shown to be a relevant data-mining strategy for the simultaneous analysis of multiblock structures, accounting for specific variation sources in each dataset and providing a balance between predictive and descriptive purposes. PMID:24528656

2. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model.

PubMed

Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

2016-01-01

Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic waste (e-waste) contaminated area by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model. Specific prediction steps based on the proposed method for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. The experimental results show that: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB-exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The desirable numerical simulation results demonstrate the feasibility of applying mathematical models in the environmental toxicology field. PMID:27118260
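The least-squares fit of a self-exciting threshold autoregressive model (commonly written SETAR; the abstract uses "SEATR") can be sketched on synthetic data: grid-search the threshold and fit each regime's AR coefficient by ordinary least squares. The two-regime AR(1) process, noise level, and quantile grid below are assumptions for illustration.

```python
import numpy as np

# Simulate a two-regime threshold AR(1): the regime switches at x = 0.
rng = np.random.default_rng(9)
n = 500
x = np.zeros(n)
for t in range(1, n):
    phi = 0.8 if x[t - 1] <= 0.0 else -0.5
    x[t] = phi * x[t - 1] + rng.normal(0.0, 0.3)

lag, resp = x[:-1], x[1:]

def fit_setar(thr):
    """OLS slope per regime (no intercept) and total sum of squared errors."""
    sse, phis = 0.0, []
    for mask in (lag <= thr, lag > thr):
        a, b = lag[mask], resp[mask]
        phi = (a @ b) / (a @ a)
        phis.append(phi)
        sse += float(np.sum((b - phi * a) ** 2))
    return sse, phis

# Threshold chosen by least squares over a quantile grid of candidate values.
candidates = np.quantile(lag, np.linspace(0.15, 0.85, 29))
thr_hat = min(candidates, key=lambda t: fit_setar(t)[0])
phi_lo, phi_hi = fit_setar(thr_hat)[1]
```

With enough data the estimated threshold and regime coefficients approach the simulated values, which is the sense in which the model is "least squares" fitted.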

3. Parameterized least-squares attitude history estimation and magnetic field observations of the auroral spatial structures probe

Martineau, Ryan J.

Terrestrial auroras are visible-light events caused by charged particles trapped by the Earth's magnetic field precipitating into the atmosphere along magnetic field lines near the poles. Auroral events are very dynamic, changing rapidly in time and across large spatial scales. Better knowledge of the flow of energy during an aurora will improve understanding of the heating processes in the atmosphere during geomagnetic and solar storms. The Auroral Spatial Structures Probe (ASSP) is a sounding rocket campaign to observe the middle-atmosphere plasma and electromagnetic environment during an auroral event with multipoint simultaneous measurements for fine temporal and spatial resolution. The auroral event in question occurred on January 28, 2015, with liftoff of the rocket at 10:41:01 UTC. The goal of this thesis is to produce clear observations of the magnetic field that may be used to model the current systems of the auroral event. To achieve this, the attitude of each of ASSP's seven independent payloads must be estimated, and a new attitude determination method is attempted. The new solution uses nonlinear least-squares parameter estimation with a rigid-body dynamics simulation to determine attitude with an estimated accuracy of a few degrees. Observed magnetic field perturbations found using the new attitude solution are presented, where structures of the perturbations are consistent with previous observations and electromagnetic theory.
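The nonlinear least-squares parameter estimation behind an attitude fit of this kind can be illustrated with a compact Gauss-Newton loop. The sinusoidal "spin" model, truth values, noise level, and initial guess below are toy assumptions standing in for the thesis's rigid-body dynamics simulation.

```python
import numpy as np

def model(theta, t):
    """Toy observation model b(t) = A*cos(w*t + p), an assumed stand-in."""
    A, w, p = theta
    return A * np.cos(w * t + p)

def jacobian(theta, t):
    """Analytic Jacobian of the model with respect to (A, w, p)."""
    A, w, p = theta
    c, s = np.cos(w * t + p), np.sin(w * t + p)
    return np.column_stack([c, -A * t * s, -A * s])

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
truth = np.array([2.0, 1.5, 0.3])
y = model(truth, t) + rng.normal(0.0, 0.05, t.size)

theta = np.array([1.5, 1.45, 0.0])        # rough initial guess
for _ in range(20):                        # Gauss-Newton iterations
    r = y - model(theta, t)                # residuals
    J = jacobian(theta, t)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # linearized LS subproblem
    theta = theta + step
```

In the real problem the "model" is the forward-simulated payload dynamics and the residuals compare simulated to measured sensor data, but the iteration structure is the same.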

4. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

PubMed Central

Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

2016-01-01

Chaotic time series prediction based on nonlinear systems showed a superior performance in prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps basing on the proposal methods for prenatal PCB exposure were put forward, and the proposed scheme’s validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices for birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) prenatal PCB exposed group at greater risks compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation experiments results demonstrated the feasibility of applying mathematical model in the environmental toxicology field. PMID:27118260

5. A Regularized Total Least Square Method for Snow Profiles Retrievals from Radar Back Scattering of CLOUDSAT Data

Koner, P. K.; Battaglia, A.; Simmer, C.

2008-05-01

Snowfall represents a key component of the water cycle in high-latitude/polar regions. The particle size and shape distribution is the key parameter for understanding and retrieving snow properties from remote satellite measurements. A regularized total least squares (RTLS) method is successfully introduced for retrieving profiles of snow size distribution parameters based only on W-band radar backscattering profiles. The RTLS method is extensively used in a variety of scientific disciplines (where both the Jacobian and the measurement vectors are contaminated by noise) such as signal processing, automatic control, statistics, economics, biology, medicine and remote sensing trace gas retrievals. The forward model uses DDA-based backscattering look-up tables of non-spherical particles with temperature-dependent parameterizations of the size and shape distributions of the snow particles derived from up-to-date in situ measurements of snow events. Synthetic retrievals (with all the retrieval variables known) are presented first to assess the potential of the new technique. The retrieval is then performed using the radar reflectivity factor measured by the 94 GHz CloudSat Profiling Radar with additional information about the temperature profile. The number density and size distribution width are solved for by minimizing the distance between the measured data and simulated forward model runs. We use the RTLS method for optimum regularization and the Gauss-Newton method combined with line-search strategies to handle the nonlinearity of the forward model.
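The stabilizing effect of regularization on an ill-posed least-squares problem can be shown with plain Tikhonov (ridge) regularization, a simpler relative of the RTLS method named above; RTLS additionally accounts for noise in the Jacobian itself. The Hilbert-type matrix and noise level are assumptions used only to make the problem ill-conditioned.

```python
import numpy as np

# An ill-conditioned 8x8 system (Hilbert-type matrix) with a known solution.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true

rng = np.random.default_rng(3)
b_noisy = b + rng.normal(0.0, 1e-4, n)    # tiny measurement noise

# Plain least squares amplifies the noise through the small singular values.
x_ls, *_ = np.linalg.lstsq(A, b_noisy, rcond=None)

# Tikhonov regularization: minimize ||Ax - b||^2 + lam*||x||^2.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_noisy)
```

The regularized solution trades a small bias for a dramatic reduction in noise amplification, which is the same motivation behind the RTLS retrieval.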

6. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

2016-04-01

Chaotic time series prediction based on nonlinear systems showed a superior performance in prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps basing on the proposal methods for prenatal PCB exposure were put forward, and the proposed scheme’s validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices for birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) prenatal PCB exposed group at greater risks compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation experiments results demonstrated the feasibility of applying mathematical model in the environmental toxicology field.

7. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

SciTech Connect

CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.

1999-10-01

The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements was present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
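The "statistically efficient pooling" of per-window estimates described above corresponds to the classical inverse-variance weighted mean. The sketch below illustrates that combination on synthetic per-window concentration estimates; the window count matches the abstract, but the variances and true concentration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
true_conc = 5.0
n_windows = 46

# Synthetic per-window estimates, each with its own noise variance.
variances = rng.uniform(0.01, 1.0, n_windows)
estimates = true_conc + rng.normal(0.0, np.sqrt(variances))

# Inverse-variance weighted mean: the minimum-variance unbiased combination
# of independent unbiased estimates.
w = 1.0 / variances
pooled = float(np.sum(w * estimates) / np.sum(w))
pooled_var = float(1.0 / np.sum(w))
```

The pooled variance is never larger than the best single window's variance, which is why combining all 46 windows beats picking any one of them.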

8. Correspondence and Least Squares Analyses of Soil and Rock Compositions for the Viking Lander 1 and Pathfinder Sites

NASA Technical Reports Server (NTRS)

Larsen, K. W.; Arvidson, R. E.; Jolliff, B. L.; Clark, B. C.

2000-01-01

Correspondence and Least Squares Mixing Analysis techniques are applied to the chemical composition of Viking 1 soils and Pathfinder rocks and soils. Implications for the parent composition of local and global materials are discussed.

9. Application of Partial Least Squares (PLS) Regression to Determine Landscape-Scale Aquatic Resource Vulnerability in the Ozark Mountains

EPA Science Inventory

Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology to study the associations among constituents of surface water and landscapes. Common data problems in ecological studies include: s...

10. Application of Partial Least Square (PLS) Regression to Determine Landscape-Scale Aquatic Resources Vulnerability in the Ozark Mountains

EPA Science Inventory

Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology, particularly for determining the associations among multiple constituents of surface water and landscape configuration. Common dat...

11. Using the Monte Carlo - Library Least-Squares (MCLLS) approach for the in vivo XRF measurement of lead in bone

Guo, Weijun; Gardner, Robin P.; Todd, Andrew C.

2004-01-01

The Monte Carlo - Library Least-Squares (MCLLS) method has been developed by the Center for Engineering Applications of Radioisotopes for various XRF applications of multi-elemental composition analysis and implemented with the CEARXRF code. In the present work, it is successfully applied to the in vivo XRF measurement of lead in bone and benchmarked by the measurement of a plaster of Paris phantom of known lead concentration. It is implicitly assumed that if the approach works for this sample that closely approximates the real problem of interest, it will also work for the real in vivo case when the proper description of the real case is used. Traditional techniques for XRF analysis are reviewed briefly and the full advantages of the MCLLS method are discussed. Simulation results are presented that are in good agreement with experimental results. The applicability of the MCLLS method to the lead in bone measurement is supported by the good fitting results obtained with simulated Monte Carlo elemental library spectra and close agreement between simulated and experimental spectra from a calcium-rich matrix-based calibration standard in a test geometrical configuration.

12. Applicability of Monte Carlo cross validation technique for model development and validation using generalised least squares regression

2013-03-01

In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measures of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of MCCV is likely to result in a more parsimonious model than LOO. It has also been found that MCCV can provide a more realistic estimate of a model's predictive ability when compared with LOO.
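MCCV repeatedly draws a random train/validation split, refits the model, and averages the hold-out error. The sketch below uses ordinary least squares in place of the paper's GLSR; the regression data, split fraction, and number of splits are assumptions for illustration.

```python
import numpy as np

# Synthetic linear-regression data (intercept plus three predictors).
rng = np.random.default_rng(5)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta + rng.normal(0.0, 0.3, n)

# Monte Carlo cross-validation: many random 70/30 splits, average hold-out MSE.
n_splits, hold_frac = 200, 0.3
errs = []
for _ in range(n_splits):
    idx = rng.permutation(n)
    k = int(hold_frac * n)
    test_i, train_i = idx[:k], idx[k:]
    b, *_ = np.linalg.lstsq(X[train_i], y[train_i], rcond=None)
    resid = y[test_i] - X[test_i] @ b
    errs.append(float(np.mean(resid ** 2)))
mccv_mse = float(np.mean(errs))
```

Unlike LOO, the hold-out fraction and number of repetitions are free parameters, which is what lets MCCV penalize overfitted models more strongly.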

13. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

2006-03-01

Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were then processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K1) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BP1 & BP2), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
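The FCM pre-segmentation step above can be sketched with the standard fuzzy C-means alternating updates (membership update, then weighted center update). The 1-D synthetic "feature" data, cluster count, fuzzifier m, and initialization are assumptions; real use would cluster voxel time-activity curves.

```python
import numpy as np

# Two well-separated 1-D clusters standing in for voxel kinetic features.
rng = np.random.default_rng(10)
data = np.hstack([rng.normal(0.0, 0.3, 50), rng.normal(3.0, 0.3, 50)])[:, None]

c, m = 2, 2.0                                   # clusters and fuzzifier
centers = np.array([[data.min()], [data.max()]])  # deterministic init
for _ in range(100):
    d = np.abs(data - centers.T) + 1e-12        # distances to each center
    w = d ** (-2.0 / (m - 1.0))
    u = w / w.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
    um = u.T ** m
    centers = (um @ data) / um.sum(axis=1, keepdims=True)  # weighted centers
```

Each voxel then carries a soft membership in every cluster, rather than a hard label, which is what FCM contributes before the GLLS fit.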

14. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

PubMed Central

Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

2014-01-01

This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source. PMID:24977387
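The core idea of convex combination in a sensor network can be sketched with a simple consensus iteration: each node repeatedly replaces its local source-location estimate with a convex combination of its neighbors' estimates. The ring topology, weights, and noisy local estimates below are illustrative assumptions, not the paper's DLSE algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
n_nodes = 10
source = np.array([3.0, 4.0])                       # true 2-D source location
local = source + rng.normal(0.0, 0.5, (n_nodes, 2))  # noisy local LS estimates

# Consensus by convex combination on a ring: weights (0.25, 0.5, 0.25) sum to 1,
# so every update stays inside the convex hull of current estimates.
est = local.copy()
for _ in range(100):
    new = est.copy()
    for i in range(n_nodes):
        left, right = est[(i - 1) % n_nodes], est[(i + 1) % n_nodes]
        new[i] = 0.5 * est[i] + 0.25 * (left + right)
    est = new
```

Because the combination weights are symmetric and doubly stochastic, all nodes converge to the average of the initial local estimates, using only neighbor-to-neighbor communication.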

15. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching

PubMed Central

2012-01-01

Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
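Non-negative least squares fitting of overlapping templates can be demonstrated with a small projected-gradient solver (a minimal stand-in for production NNLS routines). The two Gaussian-shaped "isotope pattern" templates and their mixing weights are toy assumptions.

```python
import numpy as np

def nnls_pg(A, b, n_iter=5000):
    """Minimize ||Ax - b||^2 subject to x >= 0 by projected gradient descent."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - grad / L)  # gradient step, then project onto x>=0
    return x

# Two heavily overlapping toy templates on a synthetic m/z axis.
mz = np.linspace(0.0, 10.0, 200)
template = lambda c: np.exp(-0.5 * ((mz - c) / 0.3) ** 2)
A = np.column_stack([template(3.0), template(3.5)])
b = 2.0 * A[:, 0] + 1.0 * A[:, 1]          # noise-free mixture of the templates

x = nnls_pg(A, b)
```

Even with strong overlap, the non-negativity constraint lets the fit disentangle the two patterns; thresholding small coefficients then acts as the feature selection described in the abstract.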

16. Multi-phase classification by a least-squares support vector machine approach in tomography images of geological samples

Khan, Faisal; Enzmann, Frieder; Kersten, Michael

2016-03-01

Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to classify successfully three different more or less complex multi-phase rock core samples.
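An LS-SVM classifier of the kind used for the pixel classification above reduces training to one linear solve of the KKT system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y] with a kernel matrix K. The 2-D toy data, RBF bandwidth, and gamma below are assumptions; a real application would use per-pixel grey-value features.

```python
import numpy as np

def rbf(X, Z, s=1.0):
    """RBF (Gaussian) kernel matrix between row-sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

# Two synthetic classes standing in for two image phases.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-1.5, 0.5, (20, 2)), rng.normal(1.5, 0.5, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

# LS-SVM training: a single symmetric linear system instead of a QP.
gamma = 10.0
n = X.shape[0]
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
sol = np.linalg.solve(M, np.hstack([0.0, y]))
b, alpha = sol[0], sol[1:]

def predict(Z):
    return np.sign(rbf(Z, X) @ alpha + b)
```

Replacing the SVM's inequality constraints with equality constraints is what turns training into a linear solve, at the cost of losing sparsity in the support values.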

17. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

2010-09-01

In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both RBF-PLS and PLS calibration models. Also, in the case of a nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful for removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

18. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

NASA Technical Reports Server (NTRS)

Chang, Ching L.; Jiang, Bo-Nan

1990-01-01

A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

19. Modifying constrained least-squares restoration for application to single photon emission computed tomography projection images

SciTech Connect

Penney, B.C.; King, M.A.; Schwinger, R.B.; Baker, S.P.; Doherty, P.W.

1988-05-01

Image restoration methods have been shown to increase the contrast of nuclear medicine images by decreasing the effects of scatter and septal penetration. Image restoration can also reduce the high-frequency noise in the image. This study applies constrained least-squares (CLS) restoration to the projection images of single photon emission computed tomography (SPECT). In a previous study, it was noted that CLS restoration has the potential advantage of automatically adapting to the blurred object. This potential is confirmed using planar images. CLS restoration is then modified to improve its performance when applied to SPECT projection image sets. The modification was necessary because the Poisson noise in low count SPECT images causes considerable variation in the CLS filter. On phantom studies, count-dependent Metz restoration was slightly better than the modified CLS restoration method, according to measures of contrast and noise. However, CLS restoration was generally judged as yielding the best results when applied to clinical studies, apparently because of its ability to adapt to the image being restored.

20. Least-squares Legendre spectral element solutions to sound propagation problems.

PubMed

Lin, W H

2001-02-01

This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes. PMID:11248952

1. Realizations and performances of least-squares estimation and Kalman filtering by systolic arrays

SciTech Connect

Chen, M.J.

1987-01-01

Fast least-squares (LS) estimation and Kalman-filtering algorithms utilizing systolic-array implementation are studied. Based on a generalized systolic QR algorithm, a modified LS method is proposed and shown to have superior computational and inter-cell connection complexities, making it more practical for systolic-array implementation. After whitening processing, the Kalman filter can be formulated as an SRIF data-processing problem followed by a simple LS operation. This approach simplifies the computational structure and is more reliable when the system has a singular or near-singular coefficient matrix. To improve the throughput rate of the systolic Kalman filter, a topology for stripe QR processing is also proposed. By skewing the order of the input matrices, a fully pipelined systolic Kalman-filtering operation can be achieved. With O(n^2) processing units, the system throughput rate becomes O(n). The numerical properties of the systolic LS estimation and Kalman-filtering algorithms under finite word-length effects are studied via analysis and computer simulations, and are compared with those of conventional approaches. Fault tolerance of the LS estimation algorithm is also discussed. It is shown that by using a simple bypass register, reasonable estimation performance is still possible with a transiently defective processing unit.
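The numerical core the systolic arrays above implement is QR-based least squares: factor A = QR with orthogonal transformations, then back-substitute R x = Q^T b. The sketch below uses numpy's Householder QR on an assumed random overdetermined system rather than a systolic Givens-rotation pipeline, but the algebra is the same.

```python
import numpy as np

# Overdetermined system A x ~ b with known coefficients (assumed test data).
rng = np.random.default_rng(8)
A = rng.normal(size=(50, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + rng.normal(0.0, 0.01, 50)

# QR-based least squares: numerically stabler than forming A^T A directly.
Q, R = np.linalg.qr(A)            # reduced QR factorization (Householder)
x = np.linalg.solve(R, Q.T @ b)   # triangular back-substitution
```

Working with Q and R avoids squaring the condition number, which is why QR (rather than the normal equations) is the natural kernel for hardware LS arrays.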

2. Least-squares reverse-time migration with cost-effective computation and memory storage

Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

2016-06-01

Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better-balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repetitively computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid point are stored. The RTM procedure of different iterations then involves only: (1) backward propagation of the residual between the Born-modeled and acquired data, and (2) implementation of the modified excitation-amplitude imaging condition by multiplying the maximum amplitude by the back-propagated data residuals, only at the grid points that satisfy the imaging time at each time step. For a complex model, 2 or 3 local peak amplitudes and corresponding traveltimes should be identified and stored for all grid points so that multi-arrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method saves substantial computation and memory cost.

3. Texture discrimination of green tea categories based on least squares support vector machine (LSSVM) classifier

Li, Xiaoli; He, Yong; Qiu, Zhengjun; Wu, Di

2008-03-01

This research aimed to develop a multi-spectral imaging technique for discriminating green tea categories based on texture analysis. Three key wavelengths of 550, 650 and 800 nm were implemented in a common-aperture multi-spectral charge-coupled device camera, and 190 images were acquired from a data set of four different kinds of green tea. An image data set consisting of 15 texture features for each image was generated based on texture analysis techniques including the grey-level co-occurrence method (GLCM) and texture filtering. To optimize the texture features, 5 features that were not correlated with the tea category were eliminated. Unsupervised cluster analysis was then conducted on the optimized texture features using principal component analysis. The cluster analysis showed that the four kinds of green tea could be separated in the space of the first two principal components, although there was some overlap among the different kinds. To enhance the discrimination performance, a least squares support vector machine (LSSVM) classifier was developed based on the optimized texture features. Excellent discrimination performance was obtained for samples in the prediction set: 100%, 100%, 75% and 100% for the four kinds of green tea, respectively. It can be concluded that texture discrimination of green tea categories based on multi-spectral imaging technology is feasible.

4. Aerosol and trace gas profile retrievals from MAX-DOAS observations using simple least squares methods

Wagner, Thomas; Beirle, Steffen; Shaiganfar, Reza

2010-05-01

Multi-AXis (MAX-) DOAS observations have become a widely used technique for the retrieval of atmospheric profiles of trace gases and aerosols. Since the information content of MAX-DOAS observations is limited, optimal estimation techniques requiring a-priori assumptions are usually used for profile inversion. In contrast, our retrieval limits the retrieved quantities to a few basic profile parameters (e.g. profile shape and integrated column density), which are retrieved without further a-priori assumptions; the retrieval is instead based on simple least-squares methods. Despite the simple retrieval scheme, our method has the advantage of being very robust and stable. It also yields the most important parameters with good accuracy (e.g. total aerosol optical depth, total tropospheric trace gas column density, surface aerosol extinction, surface trace gas mixing ratio). Some of these parameters can even be retrieved under cloudy conditions. We present MAX-DOAS results from two measurement campaigns: the CINDI campaign in Cabauw, The Netherlands, in 2009, and the FORMAT campaign in Milano, Italy, in 2003. Results for aerosols, NO2 and HCHO are presented and compared to independent measurements.

5. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

Gao, Fang; Guo, Shuxu

2016-01-01

An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates the preliminary estimates to form the input vector of the CRLS predictor. The number of bands used in prediction is then adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 bits/pixel, 5.57 bits/pixel, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. The experimental results demonstrate that the proposed scheme obtains compression results very close to those of clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.
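The recursive least-squares prediction at the core of such schemes can be sketched as a standard textbook RLS linear predictor (illustrative only; the paper's preliminary estimates, band selection, and arithmetic-coding stages are omitted):

```python
import numpy as np

def rls_predict(signal, order=4, lam=0.99, delta=100.0):
    """Predict each sample from the previous `order` samples with
    recursive least squares (forgetting factor `lam`)."""
    w = np.zeros(order)            # predictor coefficients
    P = np.eye(order) * delta      # inverse-correlation estimate
    residuals = []
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]          # most recent sample first
        e = signal[n] - w @ x                  # a-priori prediction error
        k = P @ x / (lam + x @ P @ x)          # gain vector
        w = w + k * e                          # coefficient update
        P = (P - np.outer(k, x @ P)) / lam     # inverse-correlation update
        residuals.append(e)
    return w, np.array(residuals)
```

Each new sample updates the coefficients in O(order^2) time, and the small residuals `e` are what a lossless compressor would entropy-code.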

6. Weighted partial least squares method to improve calibration precision for spectroscopic noise-limited data

SciTech Connect

Haaland, D.M.; Jones, H.D.T.

1997-09-01

Multivariate calibration methods have been applied extensively to the quantitative analysis of Fourier transform infrared (FT-IR) spectral data. Partial least squares (PLS) methods have become the most widely used multivariate method for quantitative spectroscopic analyses. Most often these methods are limited by model error or by the accuracy or precision of the reference methods. However, in some cases, the precision of the quantitative analysis is limited by the noise in the spectroscopic signal. In these situations, the precision of the PLS calibrations and predictions can be improved by incorporating weighting into the PLS algorithm. If the spectral noise of the system is known (e.g., in detector-noise-limited cases), then appropriate weighting can be incorporated into the multivariate spectral calibrations and predictions. A weighted PLS (WPLS) algorithm was developed to improve the precision of the analysis in the case of spectral-noise-limited data. This new PLS algorithm was then tested with real and simulated data, and the results were compared with those of the unweighted PLS algorithm. Using near-infrared (NIR) spectral data, improvements in calibration precision were obtained when the WPLS algorithm was applied. The best WPLS method improved prediction precision for the analysis of one of the minor components by a factor of nearly 9 relative to the unweighted PLS algorithm.
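The core idea, down-weighting noisy spectral channels by their inverse noise variance, can be shown with a plain weighted least-squares solve (a minimal sketch; the actual WPLS algorithm builds this weighting into the PLS factor extraction itself):

```python
import numpy as np

def weighted_lsq(A, y, noise_std):
    """Solve min_x ||W^{1/2} (A x - y)||^2 with W = diag(1 / noise_std^2),
    so channels with more noise get proportionally less influence."""
    w = 1.0 / np.asarray(noise_std) ** 2
    AtW = A.T * w                       # A^T W (broadcast across columns)
    return np.linalg.solve(AtW @ A, AtW @ y)
```

With equal noise in every channel this reduces to ordinary least squares; the weighting only matters when the noise is heteroscedastic across the spectrum.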

7. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

PubMed

Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

2016-08-01

Headspace-Mass Spectrometry (HS-MS), Fourier Transform Mid-Infrared spectroscopy (FT-MIR) and UV-Visible spectrophotometry (UV-vis) instrumental responses have been combined to predict virgin olive oil sensory descriptors. A total of 343 olive oil samples analyzed during four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data with both low- and mid-level abstraction improved PLS predictions for all the olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, especially considering that the legal categorization of virgin olive oils only requires the determination of fruity and defective descriptors. PMID:27216664

8. Pole coordinates data prediction by combination of least squares extrapolation and double autoregressive prediction

Kosek, Wieslaw

2016-04-01

Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper focuses on pole coordinates data prediction by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of the differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs; thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time-frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from the time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
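The LS+AR combination, least-squares extrapolation of a deterministic model plus an autoregressive forecast of its residuals, can be sketched as follows (illustrative parameterization only; operational EOP models use specific polynomial and oscillation terms):

```python
import numpy as np

def ls_ar_predict(t, y, t_future, periods, ar_order=2):
    """LS extrapolation of trend + sinusoids, plus an AR forecast of the
    LS residuals (the 'LS+AR' combination)."""
    def design(tt):
        tt = np.asarray(tt, dtype=float)
        cols = [np.ones_like(tt), tt]
        for P in periods:
            cols += [np.sin(2 * np.pi * tt / P), np.cos(2 * np.pi * tt / P)]
        return np.column_stack(cols)

    X = design(t)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Fit AR(ar_order) coefficients to the residual series by least squares.
    n = len(resid)
    M = np.column_stack([resid[ar_order - 1 - k : n - 1 - k] for k in range(ar_order)])
    phi, *_ = np.linalg.lstsq(M, resid[ar_order:], rcond=None)

    # Iterate the AR recursion forward over the prediction horizon.
    hist = list(resid[-ar_order:])
    r_future = []
    for _ in range(len(t_future)):
        r = float(phi @ np.array(hist[-ar_order:][::-1]))
        r_future.append(r)
        hist.append(r)
    return design(t_future) @ beta + np.array(r_future)
```

The AR stage picks up whatever regular structure the deterministic LS model leaves behind, which is why mismodelled oscillations show up directly as prediction error growth.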

9. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

Koo, A.; Clare, J. F.

2012-06-01

Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.

10. Automatic retinal vessel classification using a Least Square-Support Vector Machine in VAMPIRE.

PubMed

Relan, D; MacGillivray, T; Ballerini, L; Trucco, E

2014-01-01

It is important to classify retinal blood vessels into arterioles and venules for computerised analysis of the vasculature and to aid discovery of disease biomarkers. For instance, zone B is the standardised region of a retinal image utilised for the measurement of the arteriole-to-venule width ratio (AVR), a parameter indicative of microvascular health and systemic disease. We introduce a Least Square-Support Vector Machine (LS-SVM) classifier for the first time (to the best of our knowledge) to label arterioles and venules automatically. We use only 4 image features and consider vessels inside zone B (802 vessels from 70 fundus camera images) and in an extended zone (1,207 vessels, 70 fundus camera images). We achieve an accuracy of 94.88% and 93.96% in zone B and the extended zone, respectively, with a training set of 10 images and a testing set of 60 images. With a smaller training set of only 5 images and the same testing set we achieve an accuracy of 94.16% and 93.95%, respectively. This experiment was repeated five times by randomly choosing 10 and 5 images for the training set. Mean classification accuracies were close to the results above. We conclude that the performance of our system is very promising and outperforms most recently reported systems. Our approach requires smaller training data sets than others but still results in a similar or higher classification rate. PMID:25569917
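Training an LS-SVM reduces to solving a single linear system rather than a quadratic program. A minimal sketch in one common formulation (labels in {-1, +1}, RBF kernel; illustrative, not the VAMPIRE implementation):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving one (n+1)x(n+1) linear system."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma   # kernel + ridge term
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                              # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ alpha + b)
```

Because every training sample contributes a dual coefficient, the model is dense, but with only 4 features and small training sets (as in the study above) the solve is cheap.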

11. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

PubMed Central

Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

2011-01-01

The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependence on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimate of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling. PMID:22164092
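The circular weighted least-squares step can be sketched as follows, assuming ranges have already been derived from RSS via a channel model and that each weight reflects the accuracy of the corresponding measurement (names are illustrative):

```python
import numpy as np

def wls_circular(anchors, dists, weights):
    """Weighted least-squares multilateration (circular positioning).

    Linearizes the circle equations |p - a_i|^2 = d_i^2 against the
    first anchor and solves the weighted normal equations."""
    anchors = np.asarray(anchors, dtype=float)
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + (anchors[1:] ** 2).sum(axis=1) - (a0**2).sum()
    W = np.asarray(weights, dtype=float)[1:]    # per-measurement accuracy weights
    AtW = A.T * W
    return np.linalg.solve(AtW @ A, AtW @ b)
```

Setting all weights equal recovers the standard (unweighted) circular positioning solution; accuracy-dependent weights are what give the robustness to channel-model error.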

12. Distributed weighted least-squares estimation with fast convergence for large-scale systems☆

PubMed Central

Marelli, Damián Edgardo; Fu, Minyue

2015-01-01

In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

13. Entropy and generalized least square methods in assessment of the regional value of streamgages

USGS Publications Warehouse

Markus, M.; Vernon, Knapp H.; Tasker, Gary D.

2003-01-01

The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.

14. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

Liu, X. Y.; Alfi, S.; Bruni, S.

2016-06-01

A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on the recursive least squares (RLS) algorithm applied to a deterministic 'input-output' model. RLS has a Kalman-filtering character and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of the variables. The identification of suspension parameters is achieved by learning the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for a future vehicle condition monitoring system.

15. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

PubMed

Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

2011-12-15

Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) which utilizes a newly defined similarity between samples is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843

16. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

PubMed

Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

2015-12-01

Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step toward improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction and classification, providing lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference in lumen area of 0.11 mm(2) compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features. PMID:26433615

17. Prediction of protein-protein interactions based on protein-protein correlation using least squares regression.

PubMed

Huang, De-Shuang; Zhang, Lei; Han, Kyungsook; Deng, Suping; Yang, Kai; Zhang, Hongbo

2014-01-01

In order to transform protein sequences into feature vectors, several methods have been developed, such as computing the auto covariance (AC), conjoint triad (CT), local descriptor (LD), Moran autocorrelation (MA) and normalized Moreau-Broto autocorrelation (NMB). In this paper, we adopt these transformation methods to encode the proteins, where AC, CT, LD, MA and NMB are all represented by '+' in a unified manner. A new method, i.e. the combination of least squares regression with '+' (abbreviated as LSR(+)), is introduced for encoding a protein-protein correlation-based feature representation and an interacting protein pair. There are thus five different combinations for LSR(+), i.e. LSRAC, LSRCT, LSRLD, LSRMA and LSRNMB. We then combined a support vector machine (SVM) approach with LSR(+) to predict protein-protein interactions (PPI) and PPI networks. The proposed method has been applied to four datasets, i.e. Saccharomyces cerevisiae, Escherichia coli, Homo sapiens and Caenorhabditis elegans. The experimental results demonstrate that all LSR(+) methods outperform many existing representative algorithms. Therefore, LSR(+) is a powerful tool to characterize protein-protein correlations and to infer PPI, whilst keeping high performance on prediction of PPI networks. PMID:25059329

18. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm

Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

2016-02-01

The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized in addition to meeting the field uniformity requirement, because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach, however, is generally slow and also has difficulty balancing the field quality against the total amount of steel used for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and total shim mass. A least-squares method is used to minimize the magnetic field inhomogeneity over the imaging surface, with the total mass of steel controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach.
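The L1-regularized least-squares formulation can be illustrated with a generic ISTA (proximal-gradient) solver; here `A` would hypothetically map candidate shim masses to field values on the imaging surface (a sketch of the optimization idea, not the authors' shimming code):

```python
import numpy as np

def l1_least_squares(A, b, lam=0.1, n_iter=1000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by ISTA (proximal gradient).
    The soft-threshold step sets many coefficients exactly to zero."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```

Driving many coefficients to exactly zero is what limits the total amount of shim material, in contrast to an unconstrained least-squares fit, which would spread small masses over every shim position.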

19. Degree of deacetylation of chitosan by infrared spectroscopy and partial least squares.

PubMed

Dimzon, Ian Ken D; Knepper, Thomas P

2015-01-01

The determination of the degree of deacetylation of highly deacetylated chitosan by infrared (IR) spectroscopy was significantly improved with the use of partial least squares (PLS). The IR spectral region from 1500 to 1800 cm(-1) was taken as the dataset. Different PLS models resulting from various data pre-treatments were evaluated and compared. The PLS model that gave excellent internal and external validation performance came from the data that were baseline-corrected and normalized relative to the maximum corrected absorbance. Analysis of the PLS loadings plot showed that the important variables in the spectral region came from the absorption maxima related to the amide bands at 1660 and 1550 cm(-1) and the amine band at 1600 cm(-1). IR-PLS results were comparable to those obtained by potentiometric titration, and were found to be more precise and rugged than the usual IR absorbance-ratio method. This is consistent with the fact that IR spectral resolution is limited and that the absorption at a single wavelength is influenced by other factors such as hydrogen bonding and the presence of water. PMID:25316417

20. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

2013-10-01

We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.

1. IUPAC-consistent approach to the limit of detection in partial least-squares calibration.

PubMed

Allegrini, Franco; Olivieri, Alejandro C

2014-08-01

There is currently no well-defined procedure for providing the limit of detection (LOD) in multivariate calibration. Defining an estimator for the LOD in this scenario has shown to be more complex than intuitively extending the traditional univariate definition. For these reasons, although many attempts have been made to arrive at a reasonable convention, additional effort is required to achieve full agreement between the univariate and multivariate LOD definitions. In this work, a novel approach is presented to estimate the LOD in partial least-squares (PLS) calibration. Instead of a single LOD value, an interval of LODs is provided, which depends on the variation of the background composition in the calibration space. This is in contrast with previously proposed univariate extensions of the LOD concept. With the present definition, the LOD interval becomes a parameter characterizing the overall PLS calibration model, and not each test sample in particular, as has been proposed in the past. The new approach takes into account IUPAC official recommendations, and also the latest developments in error-in-variables theory for PLS calibration. Both simulated and real analytical systems have been studied for illustrating the properties of the new LOD concept. PMID:25008998

2. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

USGS Publications Warehouse

Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

2013-01-01

At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
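A minimal NIPALS-style PLS1 regression, the core of such predictive models (a sketch; the paper's site-specific tuning of the exceedance decision threshold is not shown):

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1; returns centering info and a regression vector."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xd, yd = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd                      # weight: covariance direction
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:                    # residual already fully explained
            break
        w /= nrm
        t = Xd @ w                         # score
        tt = t @ t
        p = Xd.T @ t / tt                  # X loading
        c = (yd @ t) / tt                  # y loading
        Xd = Xd - np.outer(t, p)           # deflate
        yd = yd - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)    # regression vector in original X space
    return x_mean, y_mean, B

def pls1_predict(model, X_new):
    x_mean, y_mean, B = model
    return (X_new - x_mean) @ B + y_mean
```

Because each PLS component is extracted along the direction of maximum covariance with the response, the method tolerates correlated environmental predictors that would destabilize ordinary multiple regression; the number of components plays the role of the tuning parameter.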

3. [UV spectroscopy coupled with partial least squares to determine the enantiomeric composition in chiral drugs].

PubMed

Li, Qian-qian; Wu, Li-jun; Liu, Wei; Cao, Jin-li; Duan, Jia; Huang, Yue; Min, Shun-geng

2012-02-01

In the present study, sucrose was used as a chiral selector to detect the molar fraction of R-metalaxyl and S-ibuprofen due to the UV spectral difference caused by the interaction of the R- and S-isomer with sucrose. The quantitative model of the molar fraction of R-metalaxyl was established by partial least squares (PLS) regression and the robustness of the models was evaluated by 6 independent validation samples. The determination coefficient R2 and the standard error of calibration set (SEC) was 99.98% and 0.003 respectively. The correlation coefficient of estimated value and specified value, the standard error and the relative standard deviation (RSD) of the independent validation samples was 0.999 8, 0.000 4 and 0.054% respectively. The quantitative models of the molar fraction of S-ibuprofen were established by PLS and the robustness of models was evaluated. The determination coefficient R2 and the standard error of calibration set (SEC) was 99.82% and 0.007 respectively. The correlation coefficient of estimated value and specified value of the independent validation samples was 0.998 1. The standard error of prediction (SEP) was 0.002 and the relative standard deviation (RSD) was 0.2%. The result demonstrates that sucrose is an ideal chiral selector for building a stable regression model to determine the enantiomeric composition. PMID:22512198

4. Large-scale computation of incompressible viscous flow by least-squares finite element method

NASA Technical Reports Server (NTRS)

Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

1993-01-01

The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
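The matrix-free Jacobi-preconditioned conjugate gradient solver mentioned in this record can be sketched generically: the operator is supplied only as a matvec callback, so no element or global matrix is ever assembled. This is a textbook PCG sketch, not the LSFEM code; a 1D Poisson stencil stands in for the least-squares operator.

```python
import numpy as np

def pcg(apply_A, b, diag_A, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient. `apply_A` is a matvec
    callback, so the operator is never assembled (matrix-free)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                 # Jacobi preconditioner M = diag(A)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: 1D Poisson operator applied stencil-wise, never stored as a matrix.
n = 100
def apply_A(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(n)
x = pcg(apply_A, b, diag_A=np.full(n, 2.0))
```

Only the matvec and the operator diagonal are needed, which is what makes the approach attractive for large 3D meshes.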

5. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

SciTech Connect

Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

2013-01-01

Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large kernel matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
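For context, the exact l-fold cross-validation that this record's method approximates costs O(n^3) per fold, since each fold refits a kernel ridge model by solving a dense linear system. A naive NumPy baseline (illustrative only; the RBF kernel, data, and hyperparameter values are invented):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(K, y, lam):
    # Solving (K + lam*I) alpha = y costs O(n^3): the bottleneck that an
    # O(n log n) circulant approximation targets.
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def kfold_cv_error(X, y, gamma, lam, l=5):
    """Exact l-fold CV: one dense solve per fold."""
    n = len(y)
    idx = np.arange(n)
    err = 0.0
    for fold in np.array_split(idx, l):
        train = np.setdiff1d(idx, fold)
        K = rbf_kernel(X[train], X[train], gamma)
        alpha = krr_fit(K, y[train], lam)
        pred = rbf_kernel(X[fold], X[train], gamma) @ alpha
        err += ((pred - y[fold]) ** 2).sum()
    return err / n

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=120)
cv_mse = kfold_cv_error(X, y, gamma=1.0, lam=1e-2, l=5)
```

Running this grid over many (gamma, lam) pairs is what makes hyperparameter tuning expensive, motivating the approximate method.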

6. Eddy current characterization of small cracks using least square support vector machine

Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

2016-04-01

Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although EC testing is a conventional technique for defect detection and localization, its main weakness is defect characterization: the exact determination of a defect's shape and dimensions is still an open question. In this work, we demonstrate the capability of small-crack sizing using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size (length and depth) of rectangular, thin defects in a conductive plate. The approach combines a finite element method (FEM) with a statistical learning method, least squares support vector machines (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, the LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up from the EC sensor. Several methods are used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, with the accuracy verified experimentally. These results demonstrate the applicability of the presented approach.
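The LS-SVM step at the heart of such an inversion reduces training to a single linear (KKT) system rather than a quadratic program. A minimal NumPy sketch of LS-SVM regression under the standard Suykens formulation; the "EC signal features" and the target function below are synthetic stand-ins, not the paper's FEM database:

```python
import numpy as np

def lssvm_train(X, y, gamma, sigma2):
    """LS-SVM regression: training amounts to solving one (n+1)x(n+1)
    linear KKT system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma2))          # RBF kernel
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                  # bias b, support values alpha

def lssvm_predict(Xtest, X, b, alpha, sigma2):
    d2 = ((Xtest[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2)) @ alpha + b

# Toy inversion-style use: learn a "crack size" from simulated signal features.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(80, 2))          # stand-in for EC signal features
y = 3 * X[:, 0] + np.sin(4 * X[:, 1])        # stand-in for crack dimension
b, alpha = lssvm_train(X, y, gamma=100.0, sigma2=0.1)
pred = lssvm_predict(X, X, b, alpha, sigma2=0.1)
```

Tuning (gamma, sigma2) is the outer search that the record addresses with PSO and GA.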

7. Multidimensional model of apathy in older adults using partial least squares-path modeling.

PubMed

Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

2016-06-01

Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we found similar latent variables except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed. PMID:27153818

8. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

PubMed

Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

2015-02-01

This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient. PMID:25531948
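The reweighting idea can be illustrated on the sparse-only special case that this record generalizes: each IRLS step solves a weighted ridge system, with weights derived from a smoothed |x|^p penalty and the smoothing eps annealed toward zero. A hedged NumPy sketch; the problem sizes and parameter values are invented for the example:

```python
import numpy as np

def irls_sparse(A, b, p=1.0, lam=1e-4, iters=60):
    """Minimize ||Ax - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2) by
    iteratively reweighted least squares: each step solves a weighted
    ridge system, and eps is annealed toward 0 to sharpen the penalty."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from min-norm solution
    eps = 1.0
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = p * (x ** 2 + eps) ** (p / 2 - 1)  # smoothed |x|^p weights
        x = np.linalg.solve(2 * AtA + lam * np.diag(w), 2 * Atb)
        eps = max(eps * 0.5, 1e-10)
    return x

# Underdetermined sparse recovery: 30 measurements of a 5-sparse signal.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[rng.choice(60, 5, replace=False)] = rng.normal(size=5) + 2.0
b = A @ x_true
x_hat = irls_sparse(A, b, p=1.0)
```

The paper's contribution is extending this alternation to objectives mixing sparse (l2,q) and low-rank (Schatten-p) terms; the single-vector case above only shows the mechanics.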

9. [Quantitative analysis of alloy steel based on laser induced breakdown spectroscopy with partial least squares method].

PubMed

Cong, Zhi-Bo; Sun, Lan-Xiang; Xin, Yong; Li, Yang; Qi, Li-Feng; Yang, Zhi-Jia

2014-02-01

In the present paper, both the partial least squares (PLS) method and the calibration curve (CC) method are used to quantitatively analyze laser-induced breakdown spectroscopy data obtained from standard alloy steel samples. Both major and trace elements were quantitatively analyzed. Comparing the results of the two calibration methods yields some useful conclusions: for major elements, the PLS method is better than the CC method in quantitative analysis; more importantly, for trace elements, the CC method cannot give quantitative results because of their extremely weak characteristic spectral lines, while the PLS method still has good quantitative ability. The regression coefficients of the PLS method are also compared with the original spectral data, including background interference, to explain the advantage of the PLS method in LIBS quantitative analysis. The results show that the PLS method applied to laser-induced breakdown spectroscopy is suitable for the quantitative analysis of trace elements such as C in the metallurgical industry. PMID:24822436

10. First-order system least-squares for the Helmholtz equation

SciTech Connect

Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

1996-12-31

We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independently of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components ce^(ik(αx+βy)), where c is a complex vector and α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate ce^(ik(αx+βy)) well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of the coarser levels so that several coarse-grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.

11. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

SciTech Connect

Paulsen, Jeffrey L; Franck, John; Demas, Vasiliki; Bouchard, Louis-S.

2008-03-27

Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16 100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32 200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable "sensitive volumes."

12. Radioisotopic neutron transmission spectrometry: Quantitative analysis by using partial least-squares method.

PubMed

Kim, Jong-Yun; Choi, Yong Suk; Park, Yong Joon; Jung, Sung-Hee

2009-01-01

Neutron spectrometry, based on the scattering of high-energy fast neutrons from a radioisotope and their slowing-down by light hydrogen atoms, is a useful technique for the non-destructive, quantitative measurement of hydrogen content because it has a large measuring volume and is not affected by temperature, pressure, pH value, or color. The most common choices for a radioisotope neutron source are (252)Cf and (241)Am-Be. In this study, (252)Cf with a neutron flux of 6.3 x 10(6) n/s has been used as an attractive neutron source because of its high neutron flux and weak radioactivity. Pulse-height neutron spectra have been obtained using an in-house-built radioisotopic neutron spectrometric system equipped with a (3)He detector and a multi-channel analyzer, including a neutron shield. As a preliminary study, a polyethylene block (density of approximately 0.947 g/cc and area of 40 cm x 25 cm) was used for the determination of hydrogen content using multivariate calibration models, depending on the thickness of the block. Compared with the results obtained from a simple linear calibration model, the partial least-squares regression (PLSR) method offered better performance in the quantitative data analysis. It also revealed that the PLSR method in a neutron spectrometric system could be promising for real-time, online monitoring of powder processes to determine the content of any type of molecule containing hydrogen nuclei. PMID:19285419

13. A Least Square Method Based Model for Identifying Protein Complexes in Protein-Protein Interaction Network

PubMed Central

Dai, Qiguo; Guo, Maozu; Guo, Yingjie; Liu, Xiaoyan; Liu, Yang; Teng, Zhixia

2014-01-01

A protein complex, formed by a group of physically interacting proteins, plays a crucial role in cell activities. Great effort has been made to computationally identify protein complexes from protein-protein interaction (PPI) networks. However, prediction accuracy is still far from satisfactory, because the topological structures of protein complexes in the PPI network are complicated. This paper proposes a novel optimization framework, named PLSMC, to detect complexes from a PPI network. The method is based on the fact that if two proteins are in a common complex, they are likely to interact. PLSMC employs this relation to determine complexes by a penalized least squares method. PLSMC is applied to several public yeast PPI networks and compared with several state-of-the-art methods. The results indicate that PLSMC outperforms the other methods. In particular, complexes predicted by PLSMC match known complexes with higher accuracy than those of other methods. Furthermore, the predicted complexes have high functional homogeneity. PMID:25405206

14. Fast integer least squares estimation methods: applications-oriented review and improvement

Xu, Peiliang

2013-04-01

The integer least squares (ILS) problem, also known as the weighted closest point problem, is highly interdisciplinary, but no algorithms can find its global optimal integer solution in polynomial time. In this talk, we will review fast algorithms for the estimation of integer parameters. First, we will outline two suboptimal integer solutions, which can be important either in real-time communication systems or for solving high-dimensional GPS integer ambiguity unknowns. We then focus on the most efficient algorithms to search for the exact integer solution, which is shown to be much faster than LAMBDA in the sense that the ratio of integer candidates to be checked by the efficient algorithms to those checked by LAMBDA can be theoretically expressed as r^m, where r < 1 and m is the number of integer unknowns. Finally, we further improve the searching efficiency of the most powerful combined algorithms by implementing two sorting strategies, which can either be used for finding the exact integer solution or for constructing a suboptimal integer solution. A test example clearly demonstrates that the improved methods can perform significantly better than the most powerful combined algorithm to simultaneously find the optimal and second optimal integer solutions. More mathematical and algorithmic details of this talk can be found in Xu (2001, J Geod, 75, 408-423); Xu (2006, IEEE Trans Information Theory, 52, 3122-3138); Xu (2012, J Geod, 86, 35-52) and Xu et al. (2012, Survey Review, 44, 59-71).
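The gap between the suboptimal and exact solutions reviewed in this record can be illustrated on a toy problem: rounding the real least-squares solution is cheap but suboptimal, while enumerating an integer box around it recovers the box-constrained optimum. Real solvers such as LAMBDA prune this enumeration; the exhaustive version below is only feasible for tiny dimensions and is purely illustrative.

```python
import itertools
import numpy as np

def ils_rounding(A, y):
    """Suboptimal: solve the real LS problem, then round to integers."""
    x_real = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.rint(x_real).astype(int)

def ils_exhaustive(A, y, radius=2):
    """Exact within a box (tiny problems only): enumerate integers around
    the rounded real solution and keep the minimizer of ||y - Ax||."""
    center = ils_rounding(A, y)
    best, best_cost = None, np.inf
    ranges = [range(c - radius, c + radius + 1) for c in center]
    for cand in itertools.product(*ranges):
        cand = np.array(cand)
        cost = np.linalg.norm(y - A @ cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 3))
x_int = rng.integers(-5, 6, size=3)          # true integer ambiguities
y = A @ x_int + 0.3 * rng.normal(size=6)     # noisy observations
x_round = ils_rounding(A, y)
x_exact = ils_exhaustive(A, y)
```

By construction the search can never do worse than rounding, and when A is ill-conditioned it often does strictly better.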

15. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

PubMed

Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

2016-09-01

The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage. PMID:26672045

16. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

NASA Technical Reports Server (NTRS)

Robbins, J. W.

1985-01-01

An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions, and satellite mission parameters.
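The collocation estimate itself is one linear-algebra step: with a prescribed signal covariance C and noise covariance D, the predicted quantity is s_hat = C_pt (C_tt + D)^(-1) y. A generic NumPy sketch with an assumed Gaussian covariance model; the covariance choice and data are illustrative, not the study's gradiometry covariances:

```python
import numpy as np

def collocation_predict(t_obs, y_obs, t_pred, cov, noise_var):
    """Least-squares collocation: s_hat = C_pt (C_tt + D)^(-1) y, where
    cov() is the prescribed covariance function and D the noise covariance."""
    C_tt = cov(t_obs[:, None], t_obs[None, :])
    C_pt = cov(t_pred[:, None], t_obs[None, :])
    D = noise_var * np.eye(len(t_obs))
    return C_pt @ np.linalg.solve(C_tt + D, y_obs)

# Gaussian (squared-exponential) covariance, a common analytic model choice.
def cov(a, b, variance=1.0, length=0.5):
    return variance * np.exp(-((a - b) ** 2) / (2 * length ** 2))

t_obs = np.linspace(0, 4, 25)            # observation locations
y_obs = np.sin(t_obs)                    # centered observations (toy signal)
t_pred = np.array([1.3, 2.7])            # prediction locations
s_hat = collocation_predict(t_obs, y_obs, t_pred, cov, noise_var=1e-8)
```

The choice of covariance function is what lets the same machinery output mean anomalies, height anomalies, or other gravimetric quantities from the same centered observations.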

17. Empirical mode decomposition coupled with least square support vector machine for river flow forecasting

Ismail, Shuhaida; Shabri, Ani; Abadan, Siti Sarah

2015-02-01

This paper investigates the ability of Empirical Mode Decomposition (EMD) coupled with a Least Squares Support Vector Machine (LSSVM) model to improve the accuracy of river flow forecasting. To assess the effectiveness of this model, Bernam monthly river flow data served as the case study. The proposed model comprises three stages: decomposition, component identification, and forecasting. In the decomposition stage, EMD is employed to decompose the dataset into several Intrinsic Mode Functions (IMFs) and a residue. In the second stage, the meaningful signals are identified using a statistical measure and a new dataset is obtained. The final stage applies LSSVM as a forecasting tool to perform the river flow forecasting. The performance of the EMD-LSSVM model is compared with single LSSVM models using various statistical measures: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), correlation coefficient (R), and Coefficient of Efficiency (CE). The comparison reveals that the proposed EMD-LSSVM model serves as a useful tool and a promising new method for river flow forecasting.
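The four comparison statistics named in this record can be computed directly; a small NumPy helper, where CE is taken in its usual Nash-Sutcliffe form (an assumption about what "Coefficient of Efficiency" denotes here):

```python
import numpy as np

def forecast_scores(obs, pred):
    """MAE, RMSE, correlation coefficient R, and coefficient of
    efficiency CE (Nash-Sutcliffe form) for comparing flow forecasts."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(obs, pred)[0, 1]
    ce = 1.0 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    return mae, rmse, r, ce

mae, rmse, r, ce = forecast_scores([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

MAE and RMSE measure error magnitude, R measures linear association, and CE compares the forecast against using the observed mean as a baseline (CE = 1 is a perfect forecast, CE ≤ 0 no better than the mean).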

18. Retrieve the evaporation duct height by least-squares support vector machine algorithm

Douvenot, Remi; Fabbro, Vincent; Bourlier, Christophe; Saillard, Joseph; Fuchs, Hans-Hellmuth; Essen, Helmut; Foerster, Joerg

2009-01-01

The detection and tracking of naval targets, including low radar cross section (RCS) objects like inflatable boats or sea-skimming missiles, require a thorough knowledge of the propagation properties of the maritime boundary layer. Models exist which allow a prediction of the propagation factor using the parabolic equation algorithm. As a necessary input, the refractive index has to be known. This index, however, is strongly influenced by the actual atmospheric conditions, characterized mainly by temperature, humidity, and air pressure. An approach is initiated to retrieve the vertical profile of the refractive index from the propagation factor measured on an onboard target. The method is based on LS-SVM (Least-Squares Support Vector Machines) theory. The inversion method is used here to determine the refractive index from data measured during the VAMPIRA campaign (Validation Measurement for Propagation in the Infrared and RAdar), conducted as a multinational approach over a transmission path across the Baltic Sea. As the propagation factor was measured on two reference reflectors mounted at different heights onboard a naval vessel, the inversion method could be tested at both heights. The paper describes the experimental campaign and validates the LS-SVM inversion method for refractivity from the propagation factor on simple measured data.

19. LP Norm SAR Tomography by Iteratively Reweighted Least Square: First Results on Hong Kong

Mancon, Simone; Tebaldini, Stefano; Monti Guarnieri, Andre

2014-11-01

Synthetic aperture radar tomography (TomoSAR) is the natural extension to 3-D of conventional 2-D Synthetic Aperture Radar (SAR) imaging. In this work, we focus on urban scenarios where targets of interest are point-like and radiometrically strong, i.e. the reflectivity profile in elevation is sparse. Accordingly, the method for TomoSAR imaging suggested in this work is based on Compressive Sensing (CS) theory. CS problems are typically solved by looking for the minimal solution in some Lp norm, where 0 ≤ p ≤ 1. The solution that minimizes an arbitrary Lp norm can be obtained using the Iteratively Reweighted Least Square (IRLS) algorithm. Based on an experimental comparison among different choices for p, the conclusion drawn is that the usual choice p = 1 is the best trade-off between resolution and robustness to noise. Results from real data will be discussed by reporting a TomoSAR reconstruction of an area in Hong Kong (China), acquired by COSMO-SkyMed.

20. Gene Function Prediction from Functional Association Networks Using Kernel Partial Least Squares Regression

PubMed Central

Lehtinen, Sonja; Lees, Jon; Bähler, Jürg; Shawe-Taylor, John; Orengo, Christine

2015-01-01

With the growing availability of large-scale biological datasets, automated methods of extracting functionally meaningful information from this data are becoming increasingly important. Data relating to functional association between genes or proteins, such as co-expression or functional association, is often represented in terms of gene or protein networks. Several methods of predicting gene function from these networks have been proposed. However, evaluating the relative performance of these algorithms may not be trivial: concerns have been raised over biases in different benchmarking methods and datasets, particularly relating to non-independence of functional association data and test data. In this paper we propose a new network-based gene function prediction algorithm using a commute-time kernel and partial least squares regression (Compass). We compare Compass to GeneMANIA, a leading network-based prediction algorithm, using a number of different benchmarks, and find that Compass outperforms GeneMANIA on these benchmarks. We also explicitly explore problems associated with the non-independence of functional association data and test data. We find that a benchmark based on the Gene Ontology database, which, directly or indirectly, incorporates information from other databases, may considerably overestimate the performance of algorithms exploiting functional association data for prediction. PMID:26288239

1. Multicoil Dixon chemical species separation with an iterative least-squares estimation method.

PubMed

Reeder, Scott B; Wen, Zhifei; Yu, Huanzhou; Pineda, Angel R; Gold, Garry E; Markl, Michael; Pelc, Norbert J

2004-01-01

This work describes a new approach to multipoint Dixon fat-water separation that is amenable to pulse sequences that require short echo time (TE) increments, such as steady-state free precession (SSFP) and fast spin-echo (FSE) imaging. Using an iterative linear least-squares method that decomposes water and fat images from source images acquired at short TE increments, images with a high signal-to-noise ratio (SNR) and uniform separation of water and fat are obtained. This algorithm extends to multicoil reconstruction with minimal additional complexity. Examples of single- and multicoil fat-water decompositions are shown from source images acquired at both 1.5T and 3.0T. Examples in the knee, ankle, pelvis, abdomen, and heart are shown, using FSE, SSFP, and spoiled gradient-echo (SPGR) pulse sequences. The algorithm was applied to systems with multiple chemical species, and an example of water-fat-silicone separation is shown. An analysis of the noise performance of this method is described, and methods to improve noise performance through multicoil acquisition and field map smoothing are discussed. PMID:14705043
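The linear core of such a decomposition can be sketched for a single voxel: once the field map is known (the quantity the full iterative method alternately updates), demodulating the echoes leaves a linear least-squares problem for the complex water and fat amplitudes. The echo times, field offset, and fat chemical-shift value below are assumptions for illustration, not the paper's protocol:

```python
import numpy as np

FAT_SHIFT_HZ = -210.0   # approximate fat-water shift at 1.5 T (assumed value)

def dixon_separate(signal, echo_times, psi_hz):
    """Given echoes s_n = (W + F*exp(i*2pi*df*t_n)) * exp(i*2pi*psi*t_n)
    and a known field map psi, demodulate and solve a linear least-squares
    problem for the complex water (W) and fat (F) amplitudes. The full
    iterative method alternates this step with field-map updates."""
    t = np.asarray(echo_times)
    s_demod = signal * np.exp(-2j * np.pi * psi_hz * t)   # remove field term
    A = np.stack([np.ones_like(t),
                  np.exp(2j * np.pi * FAT_SHIFT_HZ * t)], axis=1)
    (W, F), *_ = np.linalg.lstsq(A, s_demod, rcond=None)
    return W, F

# Synthetic check: 3 echoes with short TE increments, known amplitudes.
t = np.array([1.6e-3, 3.2e-3, 4.8e-3])            # echo times in seconds
W_true, F_true, psi = 0.7 + 0j, 0.3 + 0j, 40.0    # field map offset in Hz
s = (W_true + F_true * np.exp(2j * np.pi * FAT_SHIFT_HZ * t)) \
    * np.exp(2j * np.pi * psi * t)
W, F = dixon_separate(s, t, psi)
```

Adding a third column to A with another species' frequency extends the same solve to cases like the water-fat-silicone separation mentioned in the record.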

2. An intelligent diagnosis method for rotating machinery using least squares mapping and a fuzzy neural network.

PubMed

Li, Ke; Chen, Peng; Wang, Shiming

2012-01-01

This study proposes a new condition diagnosis method for rotating machinery developed using least squares mapping (LSM) and a fuzzy neural network. The non-dimensional symptom parameters (NSPs) in the time domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using detection index (DI) is also proposed for detecting and distinguishing faults in rotating machinery. In order to raise the diagnosis sensitivity of the symptom parameters the synthetic symptom parameters (SSPs) are obtained by LSM. Moreover, possibility theory and the Dempster & Shafer theory (DST) are used to process the ambiguous relationship between symptoms and fault types. Finally, a sequential diagnosis method, using sequential inference and a fuzzy neural network realized by the partially-linearized neural network (PLNN), is also proposed, by which the conditions of rotating machinery can be identified sequentially. Practical examples of fault diagnosis for a roller bearing are shown to verify that the method is effective. PMID:22778622

3. TDRSS-user orbit determination using batch least-squares and sequential methods

NASA Technical Reports Server (NTRS)

Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

1993-01-01

The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), and operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.

4. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

NASA Technical Reports Server (NTRS)

Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

1993-01-01

The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.

5. Partial least squares correlation of multivariate cognitive abilities and local brain structure in children and adolescents.

PubMed

Ziegler, G; Dahnke, R; Winkler, A D; Gaser, C

2013-11-15

Intelligent behavior is not a one-dimensional phenomenon. Individual differences in human cognitive abilities might be therefore described by a 'cognitive manifold' of intercorrelated tests from partially independent domains of general intelligence and executive functions. However, the relationship between these individual differences and brain morphology is not yet fully understood. Here we take a multivariate approach to analyzing covariations across individuals in two feature spaces: the low-dimensional space of cognitive ability subtests and the high-dimensional space of local gray matter volume obtained from voxel-based morphometry. By exploiting a partial least squares correlation framework in a large sample of 286 healthy children and adolescents, we identify directions of maximum covariance between both spaces in terms of latent variable modeling. We obtain an orthogonal set of latent variables representing commonalities in the brain-behavior system, which emphasize specific neuronal networks involved in cognitive ability differences. We further explore the early lifespan maturation of the covariance between cognitive abilities and local gray matter volume. The dominant latent variable revealed positive weights across widespread gray matter regions (in the brain domain) and the strongest weights for parents' ratings of children's executive function (in the cognitive domain). The obtained latent variables for brain and cognitive abilities exhibited moderate correlations of 0.46-0.6. Moreover, the multivariate modeling revealed indications for a heterochronic formation of the association as a process of brain maturation across different age groups. PMID:23727321
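Partial least squares correlation of two data blocks, as used in this record, reduces to an SVD of their cross-covariance matrix: the singular vectors give paired weights (saliences) whose projections, the latent variables, covary maximally. A minimal NumPy sketch on synthetic data with one shared factor; the sample size 286 echoes the study, but everything else is invented:

```python
import numpy as np

def pls_correlation(X, Y, n_lv=2):
    """PLS correlation: SVD of the cross-covariance between two centered
    blocks yields paired weights whose latent variables covary maximally."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    R = Xc.T @ Yc / (len(X) - 1)          # cross-covariance matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Lx = Xc @ U[:, :n_lv]                 # e.g. brain-side latent variables
    Ly = Yc @ Vt.T[:, :n_lv]              # e.g. cognition-side latent variables
    return Lx, Ly, s[:n_lv]

rng = np.random.default_rng(5)
shared = rng.normal(size=(286, 1))                     # common factor
X = shared @ rng.normal(size=(1, 50)) + rng.normal(size=(286, 50))
Y = shared @ rng.normal(size=(1, 8)) + rng.normal(size=(286, 8))
Lx, Ly, s = pls_correlation(X, Y)
r1 = np.corrcoef(Lx[:, 0], Ly[:, 0])[0, 1]             # first-LV correlation
```

The singular values order the latent variables by explained cross-block covariance, which is why the dominant latent variable carries the main brain-behavior association in such analyses.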

6. Partial Least Square Discriminant Analysis Discovered a Dietary Pattern Inversely Associated with Nasopharyngeal Carcinoma Risk

PubMed Central

Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen

2016-01-01

Evidence on the association between dietary components, dietary patterns and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify dietary patterns associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case–control study, data from a total of 319 incident cases and 319 matched controls were analyzed. The dietary pattern was derived by employing partial least squares discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score on the PLS-DA-derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs, ordered by loading values. We observed that a one-unit increase in the score was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60–0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC-protective diet is indicated, with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimens for NPC prevention. PMID:27249558

7. Multilocus Association Testing of Quantitative Traits Based on Partial Least-Squares Analysis

PubMed Central

Zhang, Feng; Guo, Xiong; Deng, Hong-Wen

2011-01-01

Because they combine the genetic information of multiple loci, multilocus association studies (MLAS) are expected to be more powerful than single locus association studies (SLAS) in disease gene mapping. However, some researchers found that MLAS had similar or reduced power relative to SLAS, which was partly attributed to the increased degrees of freedom (dfs) in MLAS. Based on partial least-squares (PLS) analysis, we develop a MLAS approach that avoids large dfs in MLAS. In this approach, genotypes are first decomposed into PLS components that not only capture the majority of the genetic information of multiple loci, but are also relevant to the target traits. The extracted PLS components are then regressed on target traits to detect association under multilinear regression. Simulation studies based on real data from the HapMap project were used to assess the performance of our PLS-based MLAS as well as other popular multilinear regression-based MLAS approaches under various scenarios, considering genetic effects and the linkage disequilibrium structure of candidate genetic regions. Using the PLS-based MLAS approach, we conducted a genome-wide MLAS of lean body mass and compared it with our previous genome-wide SLAS of lean body mass. Results of simulations and real data analyses support the improved power of our PLS-based MLAS in disease gene mapping relative to the other three MLAS approaches investigated in this study. We aim to provide an effective and powerful MLAS approach, which may help to overcome the limitations of SLAS in disease gene mapping. PMID:21304821

8. Quantitative analysis of mixed hydrofluoric and nitric acids using Raman spectroscopy with partial least squares regression.

PubMed

Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk

2010-06-15

Mixed hydrofluoric and nitric acids are widely used as a good etchant for the pickling process of stainless steels. Cost reduction and procedure optimization in the manufacturing process can be facilitated by optically detecting the concentration of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of hydrofluoric acid (HF) and nitric acid (HNO3) mixture samples with high accuracy. The experiments were carried out for mixed acids consisting of HF (0.5-3 wt%) and HNO3 (2-12 wt%) at room temperature. Fourier transform Raman spectroscopy was utilized to measure the concentrations of the mixed acids HF and HNO3, because the mixture sample has several strong Raman bands caused by the vibrational mode of each acid in this spectrum. Calibration of the spectral data was performed using the partial least squares regression method, which is ideal for local-range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate the performance of our methodology. PMID:20441916
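A multivariate calibration of this kind can be sketched with a minimal PLS1 (NIPALS) routine: spectra are regressed onto one concentration at a time through a small number of latent components. The spectra, band positions, and concentration ranges below are synthetic stand-ins, not the paper's Raman data:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS; returns a regression vector for centered data."""
    Xr, yr = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        tt = t @ t
        p = Xr.T @ t / tt               # X loading
        q = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)        # deflate X
        yr = yr - q * t                 # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # overall regression vector
    return B, X.mean(0), y.mean()

# Synthetic "spectra": two well-separated concentration-weighted bands + noise.
rng = np.random.default_rng(1)
wn = np.linspace(0, 1, 120)
band = lambda c: np.exp(-((wn - c) / 0.05) ** 2)
conc = rng.uniform(0.5, 3.0, 40)                     # analyte (illustrative)
spectra = np.outer(conc, band(0.3)) + np.outer(rng.uniform(2, 12, 40), band(0.5))
spectra += 0.01 * rng.standard_normal(spectra.shape)

B, xm, ym = pls1_fit(spectra, conc, n_comp=3)
pred = (spectra - xm) @ B + ym
rmse = np.sqrt(np.mean((pred - conc) ** 2))
```

For two-component acid mixtures, one such PLS1 model would be fit per analyte (HF and HNO3), each against the same spectral matrix.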

9. Amplitude differences least squares method applied to temporal cardiac beat alignment

Correa, R. O.; Laciar, E.; Valentinuzzi, M. E.

2007-11-01

High-resolution averaged ECG is an important diagnostic technique in post-infarcted and/or chagasic patients with high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested in high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results indicate that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
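The LSAD idea is simple to sketch: slide the template over the detected beat and keep the lag that minimizes the sum of squared amplitude differences. The signal below is a synthetic spike, not a real ECG beat:

```python
import numpy as np

def lsad_align(beat, template, max_lag):
    """Return the lag minimizing the sum of squared amplitude differences
    between a beat segment and a template (a sketch of the LSAD idea)."""
    n = len(template)
    best_lag, best_cost = 0, np.inf
    for lag in range(-max_lag, max_lag + 1):
        seg = beat[max_lag + lag : max_lag + lag + n]
        cost = np.sum((seg - template) ** 2)
        if cost < best_cost:
            best_lag, best_cost = lag, cost
    return best_lag

# Synthetic "beat": the template shifted by a known lag, plus light noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) / 0.02) ** 2)      # crude QRS-like spike
true_lag, max_lag = 7, 20
beat = np.zeros(len(template) + 2 * max_lag)
beat[max_lag + true_lag : max_lag + true_lag + len(template)] = template
beat += 0.02 * rng.standard_normal(beat.shape)

est = lsad_align(beat, template, max_lag)        # recovers the known shift
```

Cross-correlation alignment would instead maximize `seg @ template` over the same lags; for unit-energy segments the two criteria coincide, which is why the methods differ mainly in how they respond to noise and baseline effects.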

10. Generalized Least-Squares CT Reconstruction with Detector Blur and Correlated Noise Models.

PubMed

Stayman, J Webster; Zbijewski, Wojciech; Tilley, Steven; Siewerdsen, Jeffrey

2014-03-19

The success and improved dose utilization of statistical reconstruction methods arises, in part, from their ability to incorporate sophisticated models of the physics of the measurement process and noise. Despite the great promise of statistical methods, typical measurement models ignore blurring effects, and nearly all current approaches make the presumption of independent measurements - disregarding noise correlations and a potential avenue for improved image quality. In some imaging systems, such as flat-panel-based cone-beam CT, such correlations and blurs can be a dominant factor in limiting the maximum achievable spatial resolution and noise performance. In this work, we propose a novel regularized generalized least-squares reconstruction method that includes models for both system blur and correlated noise in the projection data. We demonstrate, in simulation studies, that this approach can break through the traditional spatial resolution limits of methods that do not model these physical effects. Moreover, in comparison to other approaches that attempt deblurring without a correlation model, superior noise-resolution trade-offs can be found with the proposed approach. PMID:25328638
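The generalized least-squares estimator underlying such an approach has a compact closed form when the forward model A (including blur) and the noise covariance K are known: minimize (y - Ax)'K⁻¹(y - Ax). A toy 1D sketch follows; it is not the authors' regularized reconstruction method, and the blur kernel and covariance are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: y = A x + correlated noise, where A is a mild blur.
n = 60
x_true = np.zeros(n); x_true[25:35] = 1.0
kernel = np.array([0.2, 0.6, 0.2])          # invertible circulant blur
A = np.zeros((n, n))
for i in range(n):
    for k, c in zip((-1, 0, 1), kernel):
        A[i, (i + k) % n] += c

# Correlated noise: neighboring measurements share noise (covariance K).
K = 0.01 * (np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1)
            + 0.5 * np.diag(np.ones(n - 1), -1))
L = np.linalg.cholesky(K)
y = A @ x_true + L @ rng.standard_normal(n)

# Generalized least squares: x = argmin (y - Ax)' K^{-1} (y - Ax),
# solved via the normal equations (A' K^{-1} A) x = A' K^{-1} y.
Kinv = np.linalg.inv(K)
x_gls = np.linalg.solve(A.T @ Kinv @ A, A.T @ Kinv @ y)
```

Ordinary least squares corresponds to replacing K with the identity; the K⁻¹ weighting is what lets correlated-noise models down-weight the noise modes the detector actually produces.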

11. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

PubMed

Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

2013-08-29

A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much combining various cortical measures explained human intelligence. Since each cortical measure is thought to be not independent but highly inter-related, we applied partial least square (PLS) regression, which is one of the most promising multivariate analysis approaches, to overcome multicollinearity among cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from PLS regression analysis. Although it is difficult to relate the first derived latent variable with specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model supporting the most significant factor accounting for FSIQ. Our results presented here strongly suggest that the new predictor combining different morphometric properties of complex cortical structure is well suited for predicting human intelligence. PMID:23643979

12. Reconstruction of vibroacoustic responses of a highly nonspherical structure using Helmholtz equation least-squares method.

PubMed

Lu, Huancai; Wu, Sean F

2009-03-01

The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to vibroacoustic responses of a baffled plate are readily available so the accuracy of reconstruction can be checked accurately. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters such as number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of source surface of reconstruction on the resultant accuracy of reconstruction are examined. PMID:19275312

13. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

PubMed

Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

2016-03-31

Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. PMID:26965325
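The deconvolution step described here, resolving an observed spectrum onto a library of theoretical isotope patterns, reduces to a linear least-squares problem. A minimal sketch with invented two-species isotope envelopes (a real implementation would add nonnegativity constraints and calibrated m/z axes):

```python
import numpy as np

# Library of theoretical isotope patterns, one column per candidate species.
# The envelopes and overlap below are illustrative, not real lipid patterns.
mz_bins = 12
patterns = np.zeros((mz_bins, 2))
patterns[0:4, 0] = [1.00, 0.55, 0.17, 0.04]   # species A isotope envelope
patterns[2:6, 1] = [1.00, 0.60, 0.20, 0.05]   # species B, overlapping A

# Simulated observed spectrum: a mixture of the two patterns plus noise.
true_amounts = np.array([3.0, 1.5])
rng = np.random.default_rng(4)
spectrum = patterns @ true_amounts + 0.01 * rng.standard_normal(mz_bins)

# Least-squares resolution of the spectrum onto the pattern library.
amounts, *_ = np.linalg.lstsq(patterns, spectrum, rcond=None)
```

Because the patterns overlap in m/z, fitting them jointly (rather than integrating peaks one at a time) is what disentangles co-eluting species.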

14. Temporal parameter change of human postural control ability during upright swing using recursive least square method

Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

2009-12-01

The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body, controlled by ankle joint torque according to a PD control law in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment; they were asked to lean forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping the neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle was measured by an optical motion capture system. Three conditions were introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and pole placements were subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
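Recursive least squares estimation of control parameters of this kind can be sketched as follows. Note that this shows standard exponentially weighted RLS; the fixed trace method used in the paper is a variant that additionally rescales the covariance matrix to keep its trace constant. The regressors and gains are synthetic:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One update of exponentially weighted recursive least squares:
    theta tracks the parameters, P the (scaled) estimate covariance."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta, P

# Toy PD-control identification: torque = KP*angle + KD*angular_velocity.
rng = np.random.default_rng(5)
KP_true, KD_true = 12.0, 3.0
theta = np.zeros(2)                          # [KP, KD] estimate
P = 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.standard_normal(2)             # [angle, angular velocity]
    y = KP_true * phi[0] + KD_true * phi[1] + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```

The forgetting factor `lam < 1` is what makes the estimate track temporal changes in KP and KD, which is the point of the recursive formulation here.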

16. Application of an iterative least-squares waveform inversion of strong-motion and teleseismic records to the 1978 Tabas, Iran, earthquake

USGS Publications Warehouse

Hartzell, S.; Mendoza, C.

1991-01-01

An iterative least-squares technique is used to simultaneously invert the strong-motion records and teleseismic P waveforms for the 1978 Tabas, Iran, earthquake to deduce the rupture history. The effects of using different data sets and different parametrizations of the problem (linear versus nonlinear) are considered. A consensus of all the inversion runs indicates a complex, multiple source for the Tabas earthquake, with four main source regions over a fault length of 90 km and an average rupture velocity of 2.5 km/sec. -from Authors

17. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

2012-08-01

Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations at the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, were used to test the effectiveness of this method. The hourly CO concentrations were predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations were predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM model has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors and the mean absolute relative error were employed to compare the performance of the models. It was concluded that the errors decrease after size reduction, with coefficients of determination increasing from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.

18. Phase-space finite elements in a least-squares solution of the transport equation

SciTech Connect

Drumm, C.; Fan, W.; Pautz, S.

2013-07-01

The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

19. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

SciTech Connect

Tang Shaojie; Tang Xiangyang

2012-09-15

Purposes: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the interview sampling rate is taken into account in the projection domain
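The projection-domain PWLS objective described above, a data-fidelity term weighted by the inverse noise variance plus a roughness penalty, can be sketched in 1D with a closed-form solve. This shows a single scale only; the multiscale decomposition, edge enhancement, and angular-sampling weighting of the paper are omitted, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# Noisy 1D "projection" with signal-dependent (Poisson-like) variance.
n = 100
clean = 1.0 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, n))
var = 0.002 * clean                      # heavier noise where signal is larger
y = clean + np.sqrt(var) * rng.standard_normal(n)

# PWLS: minimize (y - x)' W (y - x) + beta * ||D x||^2 with W = diag(1/var),
# so less-reliable (high-variance) samples are down-weighted in the fit.
W = np.diag(1.0 / var)
D = np.diff(np.eye(n), axis=0)           # first-difference roughness penalty
beta = 100.0
x = np.linalg.solve(W + beta * D.T @ D, W @ y)
```

The quadratic penalty makes the solution a single linear solve; Markov random field or soft-thresholding objectives, as used in the paper, replace the quadratic penalty with edge-preserving ones and require iterative solvers.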

20. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data