Science.gov

Sample records for nonlinear least-squares fitting

  1. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  2. Nonlinear least-squares data fitting in Excel spreadsheets.

    PubMed

    Kemmer, Gerdi; Keller, Sandro

    2010-02-01

    We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
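
    As an illustration of the same workflow outside Excel, the sketch below minimizes the identical sum-of-squared-residuals objective for the Michaelis-Menten equation in Python; the data points and starting values are invented for the example.

      # Solver-style fit: minimize the sum of squared residuals of the
      # Michaelis-Menten model v = Vmax*S/(Km + S). Data are illustrative.
      import numpy as np
      from scipy.optimize import least_squares

      S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # substrate conc.
      v = np.array([0.29, 0.47, 0.68, 0.95, 1.13, 1.25])  # measured rates

      def residuals(p):
          Vmax, Km = p
          return v - Vmax * S / (Km + S)

      fit = least_squares(residuals, x0=[1.0, 1.0])  # x0 = initial guess cells
      print("Vmax, Km =", fit.x, "SSR =", 2 * fit.cost)  # cost = 0.5*sum(res**2)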

  3. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that provides a least-squares fit to uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit with up to a 100th-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
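
    The error-tolerance mode described above is easy to mimic with modern tools; a minimal sketch (NumPy in place of orthogonal factorial polynomials, with an invented tolerance and data) raises the degree until the fit error is acceptable:

      # Increase the polynomial degree until the least-squares error meets a
      # user tolerance, echoing AKLSQF's error-tolerance mode. Illustrative data.
      import numpy as np
      from numpy.polynomial import polynomial as P

      x = np.linspace(0.0, 1.0, 50)                  # uniformly spaced data
      y = np.exp(x) + 0.01 * np.random.default_rng(0).standard_normal(50)
      tol = 1e-3

      for degree in range(1, 101):                   # up to degree 100
          coeffs = P.polyfit(x, y, degree)
          err = np.sqrt(np.mean((P.polyval(x, coeffs) - y) ** 2))
          if err < tol:
              break
      print("degree =", degree, "rms error =", err)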

  4. Deming's General Least Square Fitting

    SciTech Connect

    Rinard, Phillip

    1992-02-18

    DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.

  5. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler.

  6. Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python

    NASA Astrophysics Data System (ADS)

    Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew

    2016-06-01

    Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve-fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
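
    A typical use of the package's Model class looks like this (simulated data; see the lmfit documentation for the full interface):

      # Fit a Gaussian lineshape with lmfit; parameters are named objects
      # rather than plain floats. The data here are simulated.
      import numpy as np
      from lmfit.models import GaussianModel

      rng = np.random.default_rng(0)
      x = np.linspace(-5, 5, 201)
      y = 3.0 * np.exp(-0.5 * ((x - 0.4) / 1.2) ** 2) + 0.05 * rng.standard_normal(x.size)

      model = GaussianModel()
      params = model.guess(y, x=x)           # heuristic starting values
      result = model.fit(y, params, x=x)     # Levenberg-Marquardt by default
      print(result.fit_report())             # best-fit values with uncertainties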

  7. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
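
    In practice, the standard errors discussed here come from the parameter covariance matrix s^2 (J^T J)^(-1); SciPy returns this matrix directly, as the sketch below shows for an invented Gaussian fitting problem:

      # One-sigma parameter uncertainties from the least-squares covariance
      # matrix returned by curve_fit. Model and data are illustrative.
      import numpy as np
      from scipy.optimize import curve_fit

      def gauss(x, a, mu, sigma):
          return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 100)
      y = gauss(x, 2.0, 0.1, 0.8) + 0.05 * rng.standard_normal(x.size)

      popt, pcov = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0])
      perr = np.sqrt(np.diag(pcov))      # standard error of each parameter
      print("parameters:", popt)
      print("standard errors:", perr)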

  8. BLS: Box-fitting Least Squares

    NASA Astrophysics Data System (ADS)

    Kovács, G.; Zucker, S.; Mazeh, T.

    2016-07-01

    BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.
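
    The algorithm is available in standard libraries; for example, astropy ships an implementation (the transit period, depth, and trial duration below are invented):

      # Box least squares period search with astropy's implementation.
      # The transit-like signal is simulated.
      import numpy as np
      from astropy.timeseries import BoxLeastSquares

      rng = np.random.default_rng(0)
      t = np.linspace(0, 30, 3000)                 # days
      flux = 1.0 + 0.001 * rng.standard_normal(t.size)
      flux[(t % 3.7) < 0.1] -= 0.01                # periodic box-shaped dips

      bls = BoxLeastSquares(t, flux)
      result = bls.autopower(0.1)                  # trial duration: 0.1 d
      print("best period (d):", result.period[np.argmax(result.power)])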

  9. Least-squares fitting Gompertz curve

    NASA Astrophysics Data System (ADS)

    Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf

    2004-08-01

    In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (p_i, t_i, y_i), i = 1, ..., m, m ≥ 3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
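
    For orientation, the model in question is y(t) = a exp(-b exp(-c t)); a sketch of such a weighted fit with SciPy follows, with invented data and a crude starting point playing the role of the paper's "good initial approximation":

      # Weighted least-squares fit of the Gompertz curve
      # y(t) = a*exp(-b*exp(-c*t)); weights p_i enter via sigma = 1/sqrt(p).
      # Data and the initial guess are illustrative.
      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, a, b, c):
          return a * np.exp(-b * np.exp(-c * t))

      t = np.array([0.0, 1, 2, 3, 4, 5, 6, 8, 10])
      y = np.array([0.1, 0.3, 0.9, 2.0, 3.2, 4.1, 4.6, 4.9, 5.0])
      p = np.ones_like(t)                    # equal weights in this example

      guess = [y.max(), 5.0, 0.5]
      popt, _ = curve_fit(gompertz, t, y, p0=guess, sigma=1.0 / np.sqrt(p))
      print("a, b, c =", popt)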

  10. Quantitative Evaluation of Cross-Peak Volumes in Multidimensional Spectra by Nonlinear-Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.

    A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.

  11. GaussFit: Solving least squares and robust estimation problems

    NASA Astrophysics Data System (ADS)

    Jefferys, William; McArthur, Barbara; McCartney, James

    2013-05-01

    GaussFit solves least squares and robust estimation problems; written originally for reduction of NASA Hubble Space Telescope data, it includes a complete programming language designed especially to formulate estimation problems, a built-in compiler and interpreter to support the programming language, and a built-in algebraic manipulator for calculating the required partial derivatives analytically. The code can handle nonlinear models, exact constraints, correlated observations, and models where the equations of condition contain more than one observed quantity. Written in C, GaussFit includes an experimental robust estimation capability so data sets contaminated by outliers can be handled simply and efficiently.

  12. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    ERIC Educational Resources Information Center

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
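
    The flavor of such population-based search is easy to demonstrate with differential evolution, an evolutionary global optimizer closely related to genetic algorithms (the model, data, and bounds below are invented):

      # Evolutionary global optimization of a least-squares objective, which
      # is less prone to stopping at a local minimum than local methods.
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(0)
      t = np.linspace(0, 4, 40)
      y = 2.5 * np.exp(-1.3 * t) + 0.5 + 0.02 * rng.standard_normal(t.size)

      def sse(params):
          a, k, c = params
          return np.sum((y - (a * np.exp(-k * t) + c)) ** 2)

      result = differential_evolution(sse, bounds=[(0, 10), (0, 10), (-2, 2)], seed=1)
      print("a, k, c =", result.x, "SSE =", result.fun)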

  13. Regression, least squares, and the general version of inclusive fitness.

    PubMed

    Rousset, François

    2015-11-01

    A general version of inclusive fitness based on regression is rederived with minimal mathematics and directly from the verbal interpretation of its terms that motivated the original formulation of the inclusive fitness concept. This verbal interpretation is here extended to provide the two relationships required to determine the two coefficients -c and b. These coefficients retain their definition as expected effects on the fitness of an individual, respectively of a change in allelic state of this individual, and of correlated change in allelic state of social partners. The known least-squares formulation of the relationships determining b and c can be immediately deduced and shown to be equivalent to this new formulation. These results make clear that criticisms of the mathematical tools (in particular least-squares regression) previously used to derive this version of inclusive fitness are misdirected.

  14. Nonlinear least squares - An aid to thermal property determination

    NASA Technical Reports Server (NTRS)

    Curry, D. M.; Williams, S. D.

    1972-01-01

    Nonlinear least squares techniques can be used to determine effective thermal conductivity values from experimental data. Comparisons between measured and predicted conductivity values indicate that the analytically determined values can be used with confidence in performing thermal protection system analyses. A study was performed to compare the relative efficiencies of different minimizing techniques; the method of Peckham was the most efficient.

  15. Multisplitting for linear, least squares and nonlinear problems

    SciTech Connect

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  16. Faraday rotation data analysis with least-squares elliptical fitting

    SciTech Connect

    White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
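
    The core of direct least-squares conic fitting is compact; a bare-bones sketch (synthetic ellipse points, not the authors' full Faraday-rotation pipeline) takes the conic coefficients from the smallest right singular vector of the design matrix:

      # Direct least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0:
      # the coefficient vector minimizing ||D @ w|| with ||w|| = 1 is the right
      # singular vector of D with the smallest singular value.
      import numpy as np

      rng = np.random.default_rng(0)
      theta = np.linspace(0, 2 * np.pi, 200)
      x = 3.0 * np.cos(theta) + 0.01 * rng.standard_normal(theta.size)
      y = 1.5 * np.sin(theta + 0.4) + 0.01 * rng.standard_normal(theta.size)

      D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
      conic = np.linalg.svd(D, full_matrices=False)[2][-1]
      print("conic coefficients:", conic)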

  17. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  18. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
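
    For least-squares estimation the criterion reduces to the familiar form AIC = n ln(SSE/n) + 2k, with k the number of estimated parameters; a toy comparison across candidate dimensionalities (all numbers invented):

      # AIC for least-squares fits: lower is better; the 2k term penalizes
      # added dimensions. SSE values stand in for an MDS loss at each dimension.
      import math

      n = 45                                  # fitted proximities (10 points)
      sse_by_dim = {1: 12.40, 2: 3.10, 3: 2.95, 4: 2.90}   # invented losses

      for dim, sse in sse_by_dim.items():
          k = 10 * dim                        # coordinates of 10 points
          aic = n * math.log(sse / n) + 2 * k
          print(f"dim={dim}: AIC={aic:.1f}")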

  19. The recovery of weak impulsive signals based on stochastic resonance and moving least squares fitting.

    PubMed

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-07-29

    In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. This proposed method is verified by comparing its detection results with those of a morphological filter based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.

  1. For Which X-Values Does a Least-Squares Parabolic Fit Exist?

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2005-01-01

    The normal equations discussed in this paper for a least-squares parabolic fit have a unique solution if and only if there are at least three different x-values in the observations. This requirement is satisfied by most real sets of quantitative observations. For particular data sets, the appropriateness of parabolic fits should be assessed with…

  2. Representing Topography with Second-Degree Bivariate Polynomial Functions Fitted by Least Squares.

    ERIC Educational Resources Information Center

    Neuman, Arthur Edward

    1987-01-01

    There is a need for abstracting topography other than for mapping purposes. The method employed should be simple and available to non-specialists, thereby ruling out spline representations. Generalizing from univariate first-degree least squares and from multiple regression, this article introduces bivariate polynomial functions fitted by least…

  3. Geometry of nonlinear least squares with applications to sloppy models and optimization.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Sethna, James P

    2011-03-01

    Parameter estimation by nonlinear least-squares minimization is a common problem that has an elegant geometric interpretation: the possible parameter values of a model induce a manifold within the space of data predictions. The minimization problem is then to find the point on the manifold closest to the experimental data. We show that the model manifolds of a large class of models, known as sloppy models, have many universal features; they are characterized by a geometric series of widths, extrinsic curvatures, and parameter-effect curvatures, which we describe as a hyper-ribbon. A number of common difficulties in optimizing least-squares problems are due to this common geometric structure. First, algorithms tend to run into the boundaries of the model manifold, causing parameters to diverge or become unphysical before they have been optimized. We introduce the model graph as an extension of the model manifold to remedy this problem. We argue that appropriate priors can remove the boundaries and further improve the convergence rates. We show that typical fits will have many evaporated parameters unless the data are very accurately known. Second, "bare" model parameters are usually ill-suited to describing model behavior; cost contours in parameter space tend to form hierarchies of plateaus and long narrow canyons. Geometrically, we understand this inconvenient parametrization as an extremely skewed coordinate basis and show that it induces a large parameter-effect curvature on the manifold. By constructing alternative coordinates based on geodesic motion, we show that these long narrow canyons are transformed in many cases into a single quadratic, isotropic basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting algorithms as an Euler approximation to geodesic motion in these natural coordinates on the model manifold and the model graph, respectively. By adding a geodesic acceleration adjustment to these algorithms, we alleviate the
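
    The damped step the paper reinterprets geometrically is short to write down; below is a schematic single Levenberg-Marquardt iteration, without the authors' geodesic-acceleration correction, on an invented one-parameter problem:

      # One Levenberg-Marquardt step for residuals r(p) with Jacobian J:
      # solve (J^T J + lam*I) dp = -J^T r. The paper's geodesic acceleration
      # adds a second-order correction on top of this step.
      import numpy as np

      def lm_step(residual, jacobian, p, lam):
          r, J = residual(p), jacobian(p)
          A = J.T @ J + lam * np.eye(p.size)
          return p + np.linalg.solve(A, -J.T @ r)

      t = np.linspace(0, 2, 20)
      y = np.exp(-1.7 * t)                       # data from y = exp(-k*t)
      res = lambda p: np.exp(-p[0] * t) - y
      jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)

      p = np.array([0.5])
      for _ in range(20):
          p = lm_step(res, jac, p, lam=1e-3)
      print("k =", p[0])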

  4. Constrained hierarchical least square nonlinear equation solvers. [for indefinite stiffness and large structural deformations

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Lackney, J.

    1986-01-01

    The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.

  5. A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, Fred T.

    1992-01-01

    A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.

  6. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    SciTech Connect

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
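
    A feel for the mesh-size trade-off automated above can be had with SciPy's least-squares spline, where the interior knots are chosen by hand (data and knot placement are invented):

      # Least-squares cubic spline with user-chosen interior knots: more knots
      # follow the signal more closely but admit more noise. The automatic
      # bandwidth selection described above is not reproduced here.
      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(0)
      x = np.linspace(0, 10, 500)
      y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

      knots = np.linspace(1, 9, 8)          # interior knots, strictly inside x
      spline = LSQUnivariateSpline(x, y, knots, k=3)
      print("rms residual:", np.sqrt(np.mean((spline(x) - y) ** 2)))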

  7. A new algorithm for constrained nonlinear least-squares problems, part 1

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, F. T.

    1983-01-01

    A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
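
    The same combination of ideas, a trust-region iteration with simple bounds on the unknowns, is exposed by SciPy's least_squares (the decay model and bounds below are invented):

      # Bound-constrained nonlinear least squares with a trust-region solver;
      # bounds keep the amplitude and rate constant physical (>= 0).
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      t = np.linspace(0, 5, 30)
      y = 4.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)

      def resid(p):
          a, k = p
          return a * np.exp(-k * t) - y

      fit = least_squares(resid, x0=[1.0, 0.1],
                          bounds=([0.0, 0.0], [np.inf, np.inf]),
                          method="trf")     # trust region reflective
      print("a, k =", fit.x)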

  8. Fitting of dihedral terms in classical force fields as an analytic linear least-squares problem.

    PubMed

    Hopkins, Chad W; Roitberg, Adrian E

    2014-07-28

    The derivation and optimization of most energy terms in modern force fields are aided by automated computational tools. It is therefore important to have algorithms to rapidly and precisely train large numbers of interconnected parameters to allow investigators to make better decisions about the content of molecular models. In particular, the traditional approach to deriving dihedral parameters has been a least-squares fit to target conformational energies through variational optimization strategies. We present a computational approach for simultaneously fitting force field dihedral amplitudes and phase constants which is analytic within the scope of the data set. This approach completes the optimal molecular mechanics representation of a quantum mechanical potential energy surface in a single linear least-squares fit by recasting the dihedral potential into a linear function in the parameters. We compare the resulting method to a genetic algorithm in terms of computational time and quality of fit for two simple molecules. As suggested in previous studies, arbitrary dihedral phases are only necessary when modeling chiral molecules, which include more than half of drugs currently in use, so we also examined a dihedral parametrization case for the drug amoxicillin and one of its stereoisomers where the target dihedral includes a chiral center. Asymmetric dihedral phases are needed in these types of cases to properly represent the quantum mechanical energy surface and to differentiate between stereoisomers about the chiral center.
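
    The recasting at the heart of the method is the identity A_n cos(n*phi + d_n) = a_n cos(n*phi) + b_n sin(n*phi), which is linear in (a_n, b_n); a minimal sketch with synthetic target energies (not the paper's data or force field):

      # Linear least-squares fit of dihedral amplitudes and phases: fit the
      # linear coefficients (a_n, b_n), then recover A_n and d_n.
      import numpy as np

      phi = np.linspace(-np.pi, np.pi, 73)               # scanned angles
      E = 1.2 * np.cos(phi + 0.3) + 0.4 * np.cos(2 * phi - 1.1)  # "QM" target

      orders = [1, 2, 3]
      X = np.column_stack([f(n * phi) for n in orders for f in (np.cos, np.sin)])
      coef = np.linalg.lstsq(X, E, rcond=None)[0]

      for i, n in enumerate(orders):
          a, b = coef[2 * i], coef[2 * i + 1]
          A, delta = np.hypot(a, b), np.arctan2(-b, a)   # amplitude, phase
          print(f"n={n}: amplitude={A:.3f}, phase={delta:.3f} rad")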

  9. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    SciTech Connect

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.

  10. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20 deg.

  11. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

  12. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

    2009-10-01

    Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also introduced, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least-squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.

  13. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.

  14. [Compensation-fitting extraction of dynamic spectrum based on least squares method].

    PubMed

    Lin, Ling; Wu, Ruo-Nan; Li, Yong-Cheng; Zhou, Mei; Li, Gang

    2014-07-01

    An extraction method for the dynamic spectrum (DS) with a high signal-to-noise ratio is key to achieving high-precision noninvasive detection of blood components. In order to further improve the accuracy and speed of DS extraction, the linear similarity between photoelectric plethysmography (PPG) signals at any two different wavelengths was analyzed in principle, and an experimental verification was conducted. Based on this property, the method of compensation-fitting extraction was proposed. Firstly, the baseline of the PPG at each wavelength is estimated and compensated using a single-period sampling average, which removes the effect of baseline drift caused by motion artifact. Secondly, the slope of a least squares fit between each single-wavelength PPG and the full-wavelength averaged PPG is acquired to construct the DS, which significantly suppresses random noise. Contrast experiments were conducted on 25 samples in the NIR band and the Vis band respectively. Flatness and processing time of DS using compensation-fitting extraction were compared with those using single-trial estimation. In the NIR band, the average variance using compensation-fitting estimation was 69.0% of that using single-trial estimation, and in the Vis band it was 57.4%, which shows that the flatness of DS is steadily improved. In the NIR band, the data processing time using compensation-fitting extraction could be reduced to 10% of that using single-trial estimation, and in the Vis band to 20%, which shows that the time for data processing is significantly reduced. Experimental results show that, compared with the single-trial estimation method, dynamic spectrum compensation-fitting extraction can steadily improve the signal-to-noise ratio of DS, significantly improve estimation quality, reduce data processing time, and simplify the procedure. Therefore, this new method is expected to promote the development of noninvasive blood component measurement.

  15. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

    NASA Technical Reports Server (NTRS)

    Crowley, J. K.; Clark, R. N.

    1992-01-01

    Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are fitted, over selected wavelength intervals, to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times the band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals, as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.

  16. Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Landi, G.; Loli Piccolomini, E.; Nagy, J. G.

    2015-11-01

    In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of different incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly takes into account various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.

  17. Least Squares Procedures.

    ERIC Educational Resources Information Center

    Hester, Yvette

    Least squares methods are sophisticated mathematical curve fitting procedures used in all classical parametric methods. The linear least squares approximation is most often associated with finding the "line of best fit" or the regression line. Since all statistical analyses are correlational and all classical parametric methods are least square…

  18. Comparison and Analysis of Nonlinear Least Squares Methods for Vision Based Navigation (vbn) Algorithms

    NASA Astrophysics Data System (ADS)

    Sheta, B.; Elhabiby, M.; Sheimy, N.

    2012-07-01

    A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between an existing geo-referenced database of images and the real-time captured images are used to georeference (i.e., estimate six transformation parameters - three rotation and three translation for) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter as a Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use the proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum will require a non-linear approach to overcome the high degree of non-linearity that will exist in the case of large oblique images (i.e., large rotation angles). The main objective of this paper is investigating the estimation of the georeferencing parameters necessary for VBN of aerial vehicles in the case of large values of the rotational angles, which will lead to non-linearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters, because of the expected non-linearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters. Four gradient-based nonlinear least squares methods (Trust region, Trust region dogleg algorithm, Levenberg-Marquardt, and Quasi-Newton line search method) and one non-gradient method (Nelder-Mead simplex direct search) are employed for the six transformation parameters estimation process. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependency on the objective function without any derivative information. Although, the tested gradient methods

  19. Least-squares fitting of time-domain signals for Fourier transform mass spectrometry.

    PubMed

    Aushev, Tagir; Kozhinov, Anton N; Tsybin, Yury O

    2014-07-01

    To advance Fourier transform mass spectrometry (FTMS)-based molecular structure analysis, corresponding development of the FTMS signal processing methods and instrumentation is required. Here, we demonstrate the utility of a least-squares fitting (LSF) method for analysis of FTMS time-domain (transient) signals. We evaluate the LSF method in the analysis of single- and multiple-component experimental and simulated ion cyclotron resonance (ICR) and Orbitrap FTMS transient signals. Overall, the LSF method allows one to estimate the analytical limits of the conventional instrumentation and signal processing methods in FTMS. Particularly, LSF provides accurate information on initial phases of sinusoidal components in a given transient. For instance, the phase distribution obtained for a statistical set of experimental transients reveals the effect of the first data-point problem in FT-ICR MS. Additionally, LSF might be useful to improve the implementation of the absorption-mode FT spectral representation for FTMS applications. Finally, LSF can find utility in characterization and development of filter-diagonalization method (FDM) MS.

  20. Determination of glucose concentration based on pulsed laser induced photoacoustic technique and least square fitting algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2015-08-01

    In this paper, a noninvasive glucose concentration monitoring setup based on the photoacoustic technique was established. In this setup, a 532 nm-pumped Q-switched tunable Nd:YAG pulsed laser with a repetition rate of 20 Hz was used as the photoacoustic excitation source, and an ultrasonic transducer with a central response frequency of 9.55 MHz was used as the detector of the photoacoustic signal of glucose. As a preliminary exploration of blood glucose monitoring, a series of in vitro photoacoustic measurements of aqueous glucose solutions were performed with the established setup. The photoacoustic peak-to-peak values of different concentrations of aqueous glucose solutions, induced by the pulsed laser at output wavelengths from 1300 nm to 2300 nm in intervals of 10 nm, were obtained with 512 averages. The differential spectral and first-order derivative spectral methods were used to obtain the characteristic wavelengths. For the characteristic wavelengths of glucose, the least squares fitting algorithm was used to establish the relationship between glucose concentrations and photoacoustic peak-to-peak values. The characteristic wavelengths and the predicted concentrations of the glucose solutions were obtained. Experimental results demonstrated that the predictions at the characteristic wavelengths of 1410 nm and 1510 nm were better than at others, and that this photoacoustic setup and analysis method have potential value for monitoring blood glucose concentration.

  1. On the roles of minimization and linearization in least-squares finite element models of nonlinear boundary-value problems

    NASA Astrophysics Data System (ADS)

    Payette, G. S.; Reddy, J. N.

    2011-05-01

    In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment on how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and also the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.

  2. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    PubMed

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use.
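
    The shape of the computation can be sketched as follows, though with a generic double-exponential cooling law and invented readings rather than the paper's exact Marshall-Hoare reformulation and rate-initialization procedure:

      # Fit a double-exponential cooling curve to timed temperature readings,
      # then extrapolate back to the at-death temperature. Model form, ambient
      # and at-death temperatures, and all data are illustrative assumptions.
      import numpy as np
      from scipy.optimize import least_squares, brentq

      T_amb, T_death = 20.0, 37.2
      t = np.arange(0.0, 3.0, 0.25)                # hours since first reading
      rng = np.random.default_rng(1)
      T_obs = T_amb + 14.0 * np.exp(-0.35 * (t + 2.5)) + 0.05 * rng.standard_normal(t.size)

      def model(p, s):
          c1, k1, c2, k2 = p
          return T_amb + c1 * np.exp(-k1 * s) + c2 * np.exp(-k2 * s)

      fit = least_squares(lambda p: model(p, t) - T_obs,
                          x0=[5.0, 0.3, 1.0, 1.5],
                          bounds=([0.0] * 4, [np.inf] * 4))

      tod = brentq(lambda s: model(fit.x, s) - T_death, -24.0, 0.0)
      print("estimated time of death: %.2f h before first reading" % -tod)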

  3. Analysis of magnetic measurement data by least squares fit to series expansion solution of 3-D Laplace equation

    SciTech Connect

    Blumberg, L.N.

    1992-03-01

    The authors have analyzed simulated magnetic measurements data for the SXLS bending magnet in a plane perpendicular to the reference axis at the magnet midpoint by fitting the data to an expansion solution of the 3-dimensional Laplace equation in curvilinear coordinates as proposed by Brown and Servranckx. The method of least squares is used to evaluate the expansion coefficients and their uncertainties, and compared to results from an FFT fit of 128 simulated data points on a 12-mm radius circle about the reference axis. They find that the FFT method gives smaller coefficient uncertainties that the Least Squares method when the data are within similar areas. The Least Squares method compares more favorably when a larger number of data points are used within a rectangular area of 30-mm vertical by 60-mm horizontal--perhaps the largest area within the 35-mm x 75-mm vacuum chamber for which data could be obtained. For a grid with 0.5-mm spacing within the 30 x 60 mm area the Least Squares fit gives much smaller uncertainties than the FFT. They are therefore in the favorable position of having two methods which can determine the multipole coefficients to much better accuracy than the tolerances specified to General Dynamics. The FFT method may be preferable since it requires only one Hall probe rather than the four envisioned for the least squares grid data. However least squares can attain better accuracy with fewer probe movements. The time factor in acquiring the data will likely be the determining factor in choice of method. They should further explore least squares analysis of a Fourier expansion of data on a circle or arc of a circle since that method gives coefficient uncertainties without need for multiple independent sets of data as needed by the FFT method.

  4. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Astrophysics Data System (ADS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(n̄_i) is used to approximate the uncertainties, sigma_i, in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where
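
    Because the correct Poisson least-squares equations coincide with maximum likelihood, a small-count fit can be written directly as likelihood maximization; a compact sketch with a made-up two-component model:

      # Fit a multicomponent linear model mu = A @ x to Poisson counts by
      # minimizing the exact negative log-likelihood, which the paper shows
      # is equivalent to the correctly posed Poisson least-squares equations.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      A = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
      counts = rng.poisson(A @ np.array([2.0, 3.0]))     # small counts

      def negloglike(x):
          mu = A @ x
          return np.sum(mu - counts * np.log(mu))        # up to a constant

      fit = minimize(negloglike, x0=[1.0, 1.0], bounds=[(1e-9, None)] * 2)
      print("estimated rates:", fit.x)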

  5. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time, the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.
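
    The regression core of an LS-SVR is a single linear solve; a minimal sketch with a Gaussian kernel follows (this omits the paper's recursive real-time update and sparsification step, and all data are invented):

      # Least-squares SVR core: weights solve (K + I/gamma) alpha = y; the
      # prediction is f(x) = sum_i alpha_i k(x, x_i).
      import numpy as np

      def gaussian_kernel(X1, X2, sigma=1.0):
          d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      rng = np.random.default_rng(3)
      X = rng.uniform(-3, 3, size=(60, 1))
      y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)

      gamma = 100.0                            # regularization constant
      alpha = np.linalg.solve(gaussian_kernel(X, X) + np.eye(60) / gamma, y)

      X_test = np.linspace(-3, 3, 7)[:, None]
      print(gaussian_kernel(X_test, X) @ alpha)   # predictions at test points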

  6. Improvement of structural models using covariance analysis and nonlinear generalized least squares

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.; Kuo, C. P.; Wada, B. K.

    1992-01-01

    The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.

  7. Depth estimation of face images using the nonlinear least-squares model.

    PubMed

    Sun, Zhan-Li; Lam, Kin-Man; Gao, Qing-Wei

    2013-01-01

    In this paper, we propose an efficient algorithm to reconstruct the 3D structure of a human face from one or more of its 2D images with different poses. In our algorithm, the nonlinear least-squares model is first employed to estimate the depth values of facial feature points and the pose of the 2D face image concerned by means of the similarity transform. Furthermore, different optimization schemes are presented with regard to the accuracy levels and the training time required. Our algorithm also embeds the symmetrical property of the human face into the optimization procedure, in order to alleviate the sensitivities arising from changes in pose. In addition, the regularization term, based on linear correlation, is added in the objective function to improve the estimation accuracy of the 3D structure. Further, a model-integration method is proposed to improve the depth-estimation accuracy when multiple nonfrontal-view face images are available. Experimental results on the 2D and 3D databases demonstrate the feasibility and efficiency of the proposed methods.

  9. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    USGS Publications Warehouse

    Heidari, M.; Moench, A.

    1997-01-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown, subject to bounds that are placed on the parameters of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next, the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.
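    A hedged sketch of the bounded nonlinear least-squares step described above, using scipy.optimize.least_squares. The Theis solution for a fully penetrating well stands in for Neuman's partially penetrating well model, and all parameter values, bounds, and units are illustrative.

      # Bounded NLS fit of synthetic drawdown data (Theis stand-in model).
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.special import exp1  # exponential integral E1

      def theis_drawdown(t, T, S, Q=100.0, r=10.0):
          # Theis well function; T transmissivity, S storativity (illustrative).
          u = r ** 2 * S / (4.0 * T * t)
          return Q / (4.0 * np.pi * T) * exp1(u)

      rng = np.random.default_rng(2)
      t = np.logspace(-1, 2, 30)  # observation times (days)
      obs = theis_drawdown(t, T=50.0, S=1e-4) * (1 + 0.02 * rng.standard_normal(30))

      def resid(p):
          T, S = p
          return obs - theis_drawdown(t, T, S)

      # Bounds keep transmissivity and storativity physically plausible.
      fit = least_squares(resid, x0=[10.0, 1e-3],
                          bounds=([1.0, 1e-6], [1e3, 1e-1]))
      print(fit.x)  # close to [50.0, 1e-4]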

  10. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging.

    PubMed

    Nair, S P; Righetti, R

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that strain versus time for tissues undergoing creep compression follows a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to assess the reliability of non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
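    The general RoN recipe, as summarized above, can be sketched as: fit once, estimate the noise level from the residuals, resimulate many synthetic datasets around the fitted curve, refit each, and report the spread of the refitted parameter. The exponential decay model and all settings below are illustrative stand-ins, not the authors' strain model.

      # Hedged sketch of the Resimulation-of-Noise (RoN) idea.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, s0, tau):  # exponential strain decay, stand-in model
          return s0 * np.exp(-t / tau)

      rng = np.random.default_rng(3)
      t = np.linspace(0, 5, 100)
      y = model(t, 1.0, 1.5) + 0.03 * rng.standard_normal(t.size)

      p_hat, _ = curve_fit(model, t, y, p0=(0.5, 1.0))
      sigma = np.std(y - model(t, *p_hat))      # noise level from residuals

      taus = []
      for _ in range(200):                      # resimulate and refit
          y_syn = model(t, *p_hat) + sigma * rng.standard_normal(t.size)
          p_syn, _ = curve_fit(model, t, y_syn, p0=p_hat)
          taus.append(p_syn[1])
      print(p_hat[1], np.std(taus))             # estimate and its RoN precision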

  11. Signs of divided differences yield least squares data fitting with constrained monotonicity or convexity

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2002-09-01

    Methods are presented for least squares data smoothing by using the signs of divided differences of the smoothed values. Professor M.J.D. Powell initiated the subject in the early 1980s and since then, theory, algorithms and FORTRAN software have made it applicable to several disciplines in various ways. Let us consider n data measurements of a univariate function which have been altered by random errors. Then it is usual for the divided differences of the measurements to show sign alterations, which are probably due to data errors. We make the least sum of squares change to the measurements, by requiring the sequence of divided differences of order m to have at most q sign changes for some prescribed integer q. The positions of the sign changes are integer variables of the optimization calculation, which implies a combinatorial problem whose solution can require about O(n^q) quadratic programming calculations in n variables and n - m constraints. Suitable methods have been developed for the following cases. It has been found that a dynamic programming procedure can calculate the global minimum for the important cases of piecewise monotonicity (m = 1, q ≥ 1) and piecewise convexity/concavity (m = 2, q ≥ 1) of the smoothed values. The complexity of the procedure in the case of m = 1 is O(n^2 + qn log_2 n) computer operations, which reduces to only O(n) when q = 0 (monotonicity) or q = 1 (increasing/decreasing monotonicity). The case m = 2, q ≥ 1 requires O(qn^2) computer operations and n^2 quadratic programming calculations, which reduce to one and to n - 2 quadratic programming calculations when m = 2, q = 0 (convexity) and m = 2, q = 1 (convexity/concavity), respectively. Unfortunately, the technique that achieves this efficiency does not generalize to the highly nonlinear case m ≥ 3, q ≥ 2. However, the case m ≥ 3, q = 0 is solved by a special strictly…
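    For the simplest case above (m = 1, q = 0, i.e. monotone least-squares smoothing), the O(n) solution is given by the classic pool-adjacent-violators algorithm. A minimal unweighted sketch for an increasing fit:

      # Pool-adjacent-violators: monotone (increasing) least-squares fit.
      def pava_increasing(y):
          # Each block stores [mean value, number of points pooled].
          blocks = []
          for v in y:
              blocks.append([v, 1])
              # Merge while the monotonicity constraint is violated.
              while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                  v2, n2 = blocks.pop()
                  v1, n1 = blocks.pop()
                  blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
          out = []
          for v, n in blocks:
              out.extend([v] * n)
          return out

      print(pava_increasing([1.0, 3.0, 2.0, 4.0, 3.5]))
      # [1.0, 2.5, 2.5, 3.75, 3.75]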

  12. Non-linear least squares analysis of phase diagrams for non-ideal binary mixtures of phospholipids.

    PubMed

    Brumbaugh, E E; Johnson, M L; Huang, C H

    1990-01-01

    A computer program for non-linear least squares minimization has been applied to construct temperature-composition phase diagrams for several binary systems of different phospholipids based on their calorimetric data. The calculated phase diagram is guided to fit the calorimetric data with two adjustable parameters that describe the non-ideal mixing of lipid components in the gel and liquid-crystalline phases. The parameter estimation procedure is presented to show that the computer program can be used not only to generate phase diagrams with characteristic shapes but also to numerically estimate the lipid-lipid pair interactions between the mixed and the like pairs in the two-dimensional plane of the bilayer in both the gel and liquid-crystalline states. The binary lipid systems examined include dimyristoylphosphatidylcholine/1-palmitoyl-2-stearoylphosphatidylcholine, 1-capryl-2-behenoylphosphatidylcholine/1-behenoyl-2-lauroylphosphatidylcholine, and 1-stearoyl-2-caprylphosphatidylcholine/dimyristoylphosphatidylcholine.

  13. Total least squares fitting Michaelis-Menten enzyme kinetic model function

    NASA Astrophysics Data System (ADS)

    Jukic, Dragan; Sabo, Kristian; Scitovski, Rudolf

    2007-04-01

    The Michaelis-Menten enzyme kinetic model f(x; a, b) = ax/(b + x), a, b > 0, is widely used in biochemistry, pharmacology, biology and medical research. Given the data (p_i, x_i, y_i), i = 1, …, m, m ≥ 3, we consider the total least squares (TLS) problem for the Michaelis-Menten model. We show that it is possible that the TLS estimate does not exist. As the main result, we show that the TLS estimate exists if the data satisfy some natural conditions. Some numerical examples are included.
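    Total least squares for this model can be approximated in practice by orthogonal distance regression, which accounts for errors in both x and y. A hedged sketch using scipy.odr with synthetic data and illustrative error levels (this is ODR, not the exact TLS formulation analyzed in the paper):

      # Orthogonal distance regression for the Michaelis-Menten model.
      import numpy as np
      from scipy.odr import ODR, Model, RealData

      def michaelis_menten(beta, x):
          a, b = beta
          return a * x / (b + x)

      rng = np.random.default_rng(4)
      x_true = np.linspace(0.2, 10, 15)
      x = x_true + 0.05 * rng.standard_normal(15)  # noisy substrate conc.
      y = michaelis_menten([2.0, 1.5], x_true) + 0.03 * rng.standard_normal(15)

      data = RealData(x, y, sx=0.05, sy=0.03)      # errors in both variables
      out = ODR(data, Model(michaelis_menten), beta0=[1.0, 1.0]).run()
      print(out.beta)  # estimates of (a, b), near (2.0, 1.5)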

  14. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    PubMed

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1), and BOC(7,1) signals, and the resulting CCRW approximations are presented. Furthermore, the performance of the approximations is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in tracking jitter compared to the original CCRW discriminator. However, the performance gains in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
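    The truncated-SVD least-squares step at the heart of such a design can be sketched generically: solve min ||Aw - b|| while discarding small singular values that would amplify noise. The matrices below are random stand-ins for the actual CCRW design matrix and target response.

      # Truncated-SVD least-squares solve (generic sketch).
      import numpy as np

      def tsvd_lstsq(A, b, k):
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          # Keep only the k largest singular values.
          return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

      rng = np.random.default_rng(5)
      A = rng.standard_normal((200, 50))
      b = rng.standard_normal(200)
      w = tsvd_lstsq(A, b, k=20)
      print(np.linalg.norm(A @ w - b))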

  15. A non-linear least-squares approach to the refinement of all parameters involved in acid-base titrations.

    PubMed

    Arena, G; Rizzarelli, E; Sammartano, S; Rigano, C

    1979-01-01

    A non-linear least-squares computer program has been written for the refinement of the parameters involved in potentiometric acid-base titrations. The program ACBA (ACid-BAse titrations) is applicable under quite general conditions to solutions containing one or more acids or bases. The method of refinement used gives the program several advantages over the other programs described previously.

  16. Nonlinear Least-Squares Using Microcomputer Data Analysis Programs: KaleidaGraph(TM) in the Physical Chemistry Teaching Laboratory

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    2000-09-01

    A scientific data analysis and presentation program (KaleidaGraph, Synergy Software, available for both Macintosh and Windows platforms) is adopted as the main data analysis tool in the undergraduate physical chemistry teaching laboratory. The capabilities of this program (and others of this type) are illustrated with application to some common data analysis problems in the laboratory curriculum, most of which require nonlinear least-squares treatments. The examples include (i) a straight line through the origin, and its transformation into a weighted average; (ii) a declining exponential with a background, with application to first-order kinetics data; (iii) the analysis of vapor pressure data by both unweighted fitting to an exponential form and weighted fitting to a linear logarithmic relationship; (iv) the analysis of overlapped spectral lines as sums of Gaussians, with application to the H/D atomic spectrum; (v) the direct fitting of IR rotation-vibration spectral line positions (HCl); and (vi) a two-function model for bomb calorimetry temperature vs time data. Procedures for implementing the use of such software in the teaching laboratory are discussed.
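    Example (ii) in this list, a declining exponential with a background, translates directly into a short fit; the sketch below uses scipy's curve_fit in place of KaleidaGraph, with illustrative parameter values.

      # Declining exponential with background (first-order kinetics sketch).
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, A, k, bkg):
          return A * np.exp(-k * t) + bkg

      rng = np.random.default_rng(6)
      t = np.linspace(0, 10, 60)
      y = decay(t, 5.0, 0.7, 1.0) + 0.1 * rng.standard_normal(60)

      popt, pcov = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
      perr = np.sqrt(np.diag(pcov))  # standard errors of A, k, background
      print(popt, perr)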

  17. Computer program for fitting low-order polynomial splines by method of least squares

    NASA Technical Reports Server (NTRS)

    Smith, P. J.

    1972-01-01

    FITLOS is a computer program which implements a new curve-fitting technique. The main program reads input data, calls the appropriate subroutines for curve fitting, calculates a statistical analysis, and writes output data. The method was devised as a result of the need to suppress noise in the calibration of multiplier phototube capacitors.
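    FITLOS itself is not reproduced here, but the operation it implements—least-squares fitting of a low-order polynomial spline—can be sketched with SciPy's least-squares spline and user-chosen interior knots; the data and knot placement below are illustrative.

      # Least-squares cubic spline fit with fixed interior knots.
      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(13)
      x = np.linspace(0, 10, 200)
      y = np.sin(x) + 0.1 * rng.standard_normal(200)

      knots = np.linspace(1, 9, 7)                  # strictly inside (0, 10)
      spl = LSQUnivariateSpline(x, y, knots, k=3)   # k=3: cubic pieces
      print(spl(5.0), np.sin(5.0))                  # smoothed vs. true value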

  18. An evaluation of least-squares fitting methods in XAFS spectroscopy: iron-based SBA-15 catalyst formulations.

    PubMed

    Huggins, Frank E; Kim, Dae-Jung; Dunn, Brian C; Eyring, Edward M; Huffman, Gerald P

    2009-06-01

    A detailed comparison has been made of determinations by ⁵⁷Fe Mössbauer spectroscopy and four different XAFS spectroscopic methods of %Fe as hematite and ferrihydrite in 11 iron-based SBA-15 catalyst formulations. The four XAFS methods consisted of least-squares fitting of iron XANES, d(XANES)/dE, and EXAFS (k³χ and k²χ) spectra to the corresponding standard spectra of hematite and ferrihydrite. The comparison showed that, for this particular application, the EXAFS methods were superior to the XANES methods in reproducing the results of the benchmark Mössbauer method, in large part because the EXAFS spectra of the two iron-oxide standards were much less correlated than the corresponding XANES spectra. Furthermore, the EXAFS and Mössbauer results could be made completely consistent by inclusion of a factor of 1.3 ± 0.05 for the ratio of the Mössbauer recoilless fraction of hematite relative to that of ferrihydrite at room temperature (293 K). This difference in recoilless fraction is attributed to the nanoparticle nature of the ferrihydrite compared to the bulk nature of the hematite. Also discussed are possible alternative non-least-squares XAFS methods for determining the iron speciation in this application, as well as criteria for deciding whether or not least-squares XANES methods should be applied for the determination of element speciation in unknown materials. PMID:19185532
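    A linear-combination least-squares fit of an unknown spectrum to two standard spectra, with physically sensible non-negative fractions, can be sketched with scipy.optimize.nnls. The Gaussian "spectra" below are synthetic stand-ins for the hematite and ferrihydrite standards.

      # Non-negative linear-combination fit of a spectrum to two standards.
      import numpy as np
      from scipy.optimize import nnls

      E = np.linspace(0, 10, 300)
      std_a = np.exp(-(E - 4.0) ** 2)           # stand-in standard 1
      std_b = np.exp(-(E - 6.0) ** 2 / 2.0)     # stand-in standard 2
      unknown = 0.7 * std_a + 0.3 * std_b       # "measured" mixture

      A = np.column_stack([std_a, std_b])
      fractions, resid = nnls(A, unknown)
      print(fractions)  # ~ [0.7, 0.3]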

  1. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is important for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension of the original independent input matrix. Partial least squares regression is then performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to assess the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness. PMID:24772020
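    A hedged sketch of the extended-input idea: compute RBF hidden-layer activations of the original inputs, append them as extra columns, and run ordinary PLS regression on the extended matrix. Centers, kernel width, component count, and data are illustrative choices, not the paper's.

      # Extended-input PLS: original inputs plus RBF hidden-layer outputs.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)
      X = rng.standard_normal((100, 8))  # e.g. spectral channels
      y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(100)

      centers = X[rng.choice(100, size=10, replace=False)]  # RBF centers
      d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
      H = np.exp(-d2 / 2.0)              # Gaussian activations, width 1

      X_ext = np.hstack([X, H])          # extended input matrix
      pls = PLSRegression(n_components=5).fit(X_ext, y)
      pred = pls.predict(X_ext).ravel()
      print(np.sqrt(np.mean((pred - y) ** 2)))  # training RMSE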

  2. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    NASA Astrophysics Data System (ADS)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species…
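    Complex least-squares fitting in the frequency domain can be sketched by splitting the residual between the complex spectrum and a complex Lorentzian model into real and imaginary parts, so a real-valued solver minimizes both simultaneously. One synthetic signal with illustrative parameters:

      # Complex least-squares fit of a single frequency-domain Lorentzian.
      import numpy as np
      from scipy.optimize import least_squares

      def lorentzian(f, amp, f0, gamma, phase):
          return amp * np.exp(1j * phase) / (gamma + 1j * (f - f0))

      f = np.linspace(-50, 50, 400)
      rng = np.random.default_rng(8)
      spec = lorentzian(f, 3.0, 5.0, 2.0, 0.3)
      spec += 0.02 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))

      def resid(p):
          r = spec - lorentzian(f, *p)
          # Stack real and imaginary parts for a real-valued solver.
          return np.concatenate([r.real, r.imag])

      fit = least_squares(resid, x0=[1.0, 3.0, 1.0, 0.0])
      print(fit.x)  # amp, f0, gamma, phase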

  3. An integration method for scanned multi-view range images (MRIs) based on local weighted least squares (LWLS) surface fitting

    NASA Astrophysics Data System (ADS)

    Shi, Bao-Quan; Liang, Jin

    2016-02-01

    We present a novel integration method that can fuse registered, partially overlapping multi-view range images (MRIs) into a single-layer, smooth and detailed point-set surface. A maximum-likelihood criterion is developed to detect overlapping points in the MRIs. Subsequently, the detected overlapping points are shifted onto a series of piecewise-smooth local weighted least squares (LWLS) surfaces to remove the influence of scanning noise, outliers, and large gaps/registration errors. Each LWLS surface is fitted in a background neighborhood which contains sufficient information to reconstruct the local surface accurately, and the shifting operation is done in a concentric tiny neighborhood which contains the corresponding overlapping points. Finally, a simple procedure is designed to identify and merge those corresponding overlapping points. The method has the advantages of robustness to large gaps/registration errors, least-squares averaging, and uniform density distribution. Furthermore, it is efficient, since only overlapping points are processed while non-overlapping points remain as they are. Several state-of-the-art integration methods were employed for a comparison study, and the experimental results demonstrate the superiority of the novel method.

  4. Using nonlinear least squares to assess relative expression and its uncertainty in real-time qPCR studies.

    PubMed

    Tellinghuisen, Joel

    2016-03-01

    Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program. PMID:26562324

  5. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to simultaneously estimate plasma parameters—electron density, electron temperature, ground-state atom density, ground-state ion density, and metastable-state density—from the observed visible spectra of a Penning plasma discharge (PPD) source using least square fitting. The analysis is performed for the prominently observed neutral helium lines. The atomic data and analysis structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. Under this condition, the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model analysis, improves the electron temperature estimation in the simultaneous measurement.

  6. Quasi Matrix Free Preconditioners in Optimization and Nonlinear Least-Squares

    NASA Astrophysics Data System (ADS)

    Bellavia, Stefania; Bertaccini, Daniele; Morini, Benedetta

    2010-09-01

    The approximate solution of several nonlinear optimization problems requires solving sequences of symmetric linear systems. When the number of variables is large, it is advisable to use an iterative linear solver for the Newton correction step. On the other hand, the underlying linear solver can converge slowly, and the calculation of a preconditioner requires the computation of the Hessian matrix, which usually represents a major implementation task. We propose here a way to overcome, at least in part, these two preconditioning issues.

  7. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. These methods decompose the single-period pulse waveform into a fixed number (such as 3, 4 or 5) of individual waves, and they do not pay much attention to the estimation error of the key points in the pulse waveform, even though the assessment of human vascular condition depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0%, and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms.
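    A hedged sketch of the weighted least-squares multi-Gaussian decomposition: curve_fit's sigma argument carries per-sample weights (smaller sigma means larger weight, here placed around the first peak). The three-Gaussian waveform and weighting scheme are synthetic illustrations, not the authors' MCDM-optimized weights.

      # Weighted least-squares fit of a sum of Gaussians to a pulse waveform.
      import numpy as np
      from scipy.optimize import curve_fit

      def multi_gauss(t, *p):
          # p = (a1, mu1, s1, a2, mu2, s2, ...)
          y = np.zeros_like(t)
          for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
              y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
          return y

      t = np.linspace(0, 1, 200)
      truth = (1.0, 0.2, 0.05, 0.6, 0.45, 0.08, 0.3, 0.7, 0.1)
      rng = np.random.default_rng(9)
      y = multi_gauss(t, *truth) + 0.01 * rng.standard_normal(200)

      sigma = np.where(np.abs(t - 0.2) < 0.1, 0.005, 0.02)  # emphasize peak
      p0 = (0.8, 0.25, 0.05, 0.5, 0.5, 0.08, 0.2, 0.75, 0.1)
      popt, _ = curve_fit(multi_gauss, t, y, p0=p0, sigma=sigma)
      print(np.round(popt, 3))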

  8. Estimation of the free core nutation period by the sliding-window complex least-squares fit method

    NASA Astrophysics Data System (ADS)

    Zhou, Yonghong; Zhu, Qiang; Salstein, David A.; Xu, Xueqing; Shi, Si; Liao, Xinhao

    2016-05-01

    Estimation of the free core nutation (FCN) period is a challenging prospect. Mostly, two methods, one direct and one indirect, have been applied in the past to address the problem by analyzing the Earth orientation parameters observed by very long baseline interferometry. The indirect method estimates the FCN period from resonance effects of the FCN on forced nutation terms, whereas the direct method estimates the FCN period using the Fourier Transform (FT) approach. However, the FCN period estimated by the direct FT technique suffers from the non-stationary characteristics of celestial pole offsets (CPO). In this study, the FCN period is estimated by another direct method, the sliding-window complex least-squares fit (SCLF) method. The estimated values of the FCN period for the full set of 1984.0-2014.0 and four subsets (1984.0-2000.0, 2000.0-2014.0, 1984.0-1990.0, 1990.0-2014.0) range from -428.8 to -434.3 mean solar days. From the FT to the SCLF method, the uncertainty of the FCN period estimate falls from several tens of days to several days. Thus, the SCLF method may serve as an independent direct way to estimate the FCN period, complementing and validating the indirect resonance method that has been frequently used before.
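    The complex least-squares core of such a fit has a closed form for the amplitude at each trial period, so the period can be estimated by a simple residual scan. A hedged single-window sketch on a synthetic signal (real CPO series, sliding windows, and retrograde sign conventions are omitted):

      # Complex LS amplitude per trial period; pick the period with the
      # smallest residual.
      import numpy as np

      rng = np.random.default_rng(10)
      t = np.arange(0, 3000, 5.0)                    # epochs in days
      z = 0.2 * np.exp(2j * np.pi * t / 430.0)       # synthetic CPO-like signal
      z += 0.05 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

      def residual(P):
          e = np.exp(2j * np.pi * t / P)
          A = np.vdot(e, z) / np.vdot(e, e)          # complex LS amplitude
          return np.linalg.norm(z - A * e)

      periods = np.arange(400.0, 460.0, 0.1)
      best = periods[np.argmin([residual(P) for P in periods])]
      print(best)  # ~430 days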

  9. Nonlinear Least Squares Method for Gyros Bias and Attitude Estimation Using Satellite Attitude and Orbit Toolbox for Matlab

    NASA Astrophysics Data System (ADS)

    Silva, W. R.; Kuga, H. K.; Zanardi, M. C.

    2015-10-01

    The knowledge of the attitude determination is essential to the safety and control of the satellite and payload, and this involves approaches of nonlinear estimation techniques. Here one focuses on determining the attitude and the gyros drift of a real satellite CBERS-2 (China Brazil Earth Resources Satellite) using simulated measurements provided by propagator PROPAT Satellite Attitude and Orbit Toolbox for Matlab. The method used for the estimation was the Nonlinear Least Squares Estimation (NLSE). The attitude dynamical model is described by nonlinear equations involving the Euler angles. The attitude sensors available are two DSS (Digital Sun Sensor), two IRES (Infra-Red Earth Sensor), and one triad of mechanical gyros. The two IRES give direct measurements of roll and pitch angles with a certain level of error. The two DSS are nonlinear functions of roll, pitch, and yaw attitude angles. Gyros are very important sensors, as they provide direct incremental angles or angular velocities. However gyros present several sources of error of which the drift is the most troublesome. Results show that one can reach accuracies in attitude determination within the prescribed requirements, besides providing estimates of the gyro drifts which can be further used to enhance the gyro error model.

  10. Nonlinear Least-Squares Based Method for Identifying and Quantifying Single and Mixed Contaminants in Air with an Electronic Nose

    PubMed Central

    Zhou, Hanying; Homer, Margie L.; Shevade, Abhijit V.; Ryan, Margaret A.

    2006-01-01

    The Jet Propulsion Laboratory has recently developed and built an electronic nose (ENose) using a polymer-carbon composite sensing array. This ENose is designed to be used for air quality monitoring in an enclosed space, and is designed to detect, identify and quantify common contaminants at concentrations in the parts-per-million range. Its capabilities were demonstrated in an experiment aboard the National Aeronautics and Space Administration's Space Shuttle Flight STS-95. This paper describes a modified nonlinear least-squares based algorithm developed to analyze data taken by the ENose, and its performance for the identification and quantification of single gases and binary mixtures of twelve target analytes in clean air. Results from laboratory-controlled events demonstrate the effectiveness of the algorithm to identify and quantify a gas event if concentration exceeds the ENose detection threshold. Results from the flight test demonstrate that the algorithm correctly identifies and quantifies all registered events (planned or unplanned, as singles or mixtures) with no false positives and no inconsistencies with the logged events and the independent analysis of air samples.

  11. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  12. Spreadsheet Method for Evaluation of Biochemical Reaction Rate Coefficients and Their Uncertainties by Weighted Nonlinear Least-Squares Analysis of the Integrated Monod Equation

    PubMed Central

    Smith, Laurence H.; McCarty, Perry L.; Kitanidis, Peter K.

    1998-01-01

    A convenient method for evaluation of biochemical reaction rate coefficients and their uncertainties is described. The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches, such as Lineweaver-Burk plots. The nonlinear least-squares method provides accurate estimates of the rate coefficients and their uncertainties from experimental data. Linearized methods that involve inversion of data are unreliable since several important assumptions of linear regression are violated. Furthermore, when linearized methods are used, there is no basis for calculation of the uncertainties in the rate coefficients. Uncertainty estimates are crucial to studies involving comparisons of rates for different organisms or environmental conditions. The spreadsheet method uses weighted least-squares analysis to determine the best-fit values of the rate coefficients for the integrated Monod equation. Although the integrated Monod equation is an implicit expression of substrate concentration, weighted least-squares analysis can be employed to calculate approximate differences in substrate concentration between model predictions and data. An iterative search routine in a spreadsheet program is utilized to search for the best-fit values of the coefficients by minimizing the sum of squared weighted errors. The uncertainties in the best-fit values of the rate coefficients are calculated by an approximate method that can also be implemented in a spreadsheet. The uncertainty method can be used to calculate single-parameter (coefficient) confidence intervals, degrees of correlation between parameters, and joint confidence regions for two or more parameters. Example sets of calculations are presented for acetate utilization by a methanogenic mixed culture and trichloroethylene cometabolism by a methane-oxidizing mixed culture. An additional advantage of application of this…

  13. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for images of geological rock cores obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residuals, i.e. the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
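    The first stage above amounts to a linear least-squares fit of a quadratic surface z = a + bx + cy + dx^2 + exy + fy^2 to the slice, followed by subtraction. A minimal sketch on a synthetic slice (the cupping trend and texture are illustrative):

      # Quadratic-surface fit and subtraction for beam-hardening removal.
      import numpy as np

      ny, nx = 64, 64
      yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
      rng = np.random.default_rng(11)
      # Synthetic slice: cupping-like quadratic trend plus "mineral" texture.
      img = 0.002 * (xx - 32) ** 2 + 0.002 * (yy - 32) ** 2 \
            + rng.standard_normal((ny, nx))

      # Design matrix for z = a + bx + cy + dx^2 + exy + fy^2.
      X = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                           xx.ravel() ** 2, (xx * yy).ravel(), yy.ravel() ** 2])
      coef, *_ = np.linalg.lstsq(X, img.ravel(), rcond=None)
      corrected = img - (X @ coef).reshape(ny, nx)  # residual image
      print(coef)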

  14. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.

  15. Bayesian least squares deconvolution

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  16. Smooth Particle Hydrodynamics with nonlinear Moving-Least-Squares WENO reconstruction to model anisotropic dispersion in porous media

    NASA Astrophysics Data System (ADS)

    Avesani, Diego; Herrera, Paulo; Chiogna, Gabriele; Bellin, Alberto; Dumbser, Michael

    2015-06-01

    Most numerical schemes applied to solve the advection-diffusion equation are affected by numerical diffusion. Moreover, unphysical results, such as oscillations and negative concentrations, may emerge when an anisotropic dispersion tensor is used, which induces even more severe errors in the solution of multispecies reactive transport. To cope with this long-standing problem we propose a modified version of the standard Smoothed Particle Hydrodynamics (SPH) method based on a Moving-Least-Squares Weighted-Essentially-Non-Oscillatory (MLS-WENO) reconstruction of concentrations. This formulation (called MWSPH) approximates the diffusive fluxes with a Rusanov-type Riemann solver based on a high-order WENO scheme. We compare the standard SPH with MWSPH for a few test cases, considering both homogeneous and heterogeneous flow fields and different anisotropic ratios of the dispersion tensor. We show that MWSPH is stable and accurate and that it reduces the occurrence of negative concentrations compared to standard SPH. When negative concentrations are observed, their absolute values are several orders of magnitude smaller than for standard SPH. In addition, MWSPH limits spurious oscillations in the numerical solution more effectively than classical SPH. Convergence analysis shows that MWSPH is computationally more demanding than SPH, but with the payoff of a more accurate solution, which in addition is less sensitive to particle positions. The latter property simplifies the time-consuming and often user-dependent procedure of defining the initial distribution of the particles.

  17. Power spectrum analysis with least-squares fitting: amplitude bias and its elimination, with application to optical tweezers and atomic force microscope cantilevers.

    PubMed

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2010-07-01

    Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.

  1. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope.

    PubMed

    Sader, John E; Yousefi, Morteza; Friend, James R

    2014-02-01

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  2. Three Perspectives on Teaching Least Squares

    ERIC Educational Resources Information Center

    Scariano, Stephen M.; Calzada, Maria

    2004-01-01

    The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.

  3. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
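    A minimal sketch of the reparameterization trick this article describes: to propagate error into a derived quantity (here the half-life ln 2/k of an exponential decay), rewrite the model so that the quantity is itself an adjustable parameter and read its standard error off the covariance matrix. Data and model are illustrative.

      # Error propagation by reparameterizing the fit model.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(12)
      t = np.linspace(0, 10, 50)
      y = 4.0 * np.exp(-0.5 * t) + 0.05 * rng.standard_normal(50)

      def decay_k(t, A, k):            # standard parameterization
          return A * np.exp(-k * t)

      def decay_thalf(t, A, thalf):    # half-life as an adjustable parameter
          return A * np.exp(-np.log(2.0) * t / thalf)

      (_, k), pcov_k = curve_fit(decay_k, t, y, p0=(1.0, 1.0))
      (_, th), pcov_t = curve_fit(decay_thalf, t, y, p0=(1.0, 1.0))
      print(np.log(2.0) / k, th)       # same half-life two ways
      print(np.sqrt(pcov_t[1, 1]))     # propagated SE of the half-life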

  4. [Spectral quantitative analysis by nonlinear partial least squares based on neural network internal model for flue gas of thermal power plant].

    PubMed

    Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia

    2014-11-01

    To deal with the nonlinear characteristics of the spectral data for thermal power plant flue gas, a nonlinear partial least squares (PLS) analysis method with a neural-network-based internal model is adopted in this paper. The latent variables of the independent variables and the dependent variables are first extracted by PLS regression, and then used as the inputs and outputs of a neural network, respectively, to build the nonlinear internal model by training. For the spectral data of the flue gases of the thermal power plant, PLS, nonlinear PLS with a back propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS), and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root-mean-square error of prediction (RMSEP) for sulfur dioxide with BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 16.96%, 16.60% and 19.55%, respectively, relative to PLS. The RMSEP for nitric oxide is reduced by 8.60%, 8.47% and 10.09%, respectively, and that for nitrogen dioxide by 2.11%, 3.91% and 3.97%, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using neural network functions capable of closely approximating nonlinear characteristics, the nonlinear partial least squares methods with internal models discussed in this paper have good predictive capability and robustness, and can deal, to a certain extent, with the limitations of nonlinear PLS methods built on other internal models, such as polynomial and spline functions. ANFIS-NPLS has the best performance, the internal model of the adaptive fuzzy inference system having the ability to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an…

  5. Complementary use of partial least-squares and artificial neural networks for the non-linear spectrophotometric analysis of pharmaceutical samples.

    PubMed

    Goicoechea, Héctor C; Collado, María S; Satuf, María L; Olivieri, Alejandro C

    2002-10-01

    The complementary use of partial least-squares (PLS) multivariate calibration and artificial neural networks (ANNs) for the simultaneous spectrophotometric determination of three active components in a pharmaceutical formulation has been explored. The presence of non-linearities caused by chemical interactions was confirmed by a recently discussed methodology based on Mallows augmented partial residual plots. Ternary mixtures of chlorpheniramine, naphazoline and dexamethasone in a matrix of excipients have been resolved by using PLS for the two major analytes (chlorpheniramine and naphazoline) and ANNs for the minor one (dexamethasone). Notwithstanding the large number of constituents, their high degree of spectral overlap and the occurrence of non-linearities, rapid and simultaneous analysis has been achieved, with reasonably good accuracy and precision. No extraction procedures using non-aqueous solvents are required.

  6. Interpreting Least Squares without Sampling Assumptions.

    ERIC Educational Resources Information Center

    Beaton, Albert E.

    Least squares fitting process as a method of data reduction is presented. The general strategy is to consider fitting (linear) models as partitioning data into a fit and residuals. The fit can be parsimoniously represented by a summary of the data. A fit is considered adequate if the residuals are small enough so that manipulating their signs and…

  7. Interpolating moving least-squares methods for fitting potential energy surfaces: computing high-density potential energy surface data from low-density ab initio data points.

    SciTech Connect

    Dawes, R.; Thompson, D. L.; Guo, Y.; Wagner, A. F.; Minkoff, M.; Chemistry; Univ. of Missouri-Columbia; Oklahoma State Univ.

    2007-05-11

    A highly accurate and efficient method for molecular global potential energy surface (PES) construction and fitting is demonstrated. An interpolating-moving-least-squares (IMLS)-based method is developed using low-density ab initio Hessian values to compute high-density PES parameters suitable for accurate and efficient PES representation. The method is automated and flexible so that a PES can be optimally generated for classical trajectories, spectroscopy, or other applications. Two important bottlenecks for fitting PESs are addressed. First, high accuracy is obtained using a minimal density of ab initio points, thus overcoming the bottleneck of ab initio point generation faced in applications of modified-Shepard-based methods. Second, high efficiency is also possible (suitable when a huge number of potential energy and gradient evaluations are required during a trajectory calculation). This overcomes the bottleneck in high-order IMLS-based methods, i.e., the high cost/accuracy ratio for potential energy evaluations. The result is a set of hybrid IMLS methods in which high-order IMLS is used with low-density ab initio Hessian data to compute a dense grid of points at which the energy, Hessian, or even high-order IMLS fitting parameters are stored. A series of hybrid methods is then possible as these data can be used for neural network fitting, modified-Shepard interpolation, or approximate IMLS. Results that are indicative of the accuracy, efficiency, and scalability are presented for one-dimensional model potentials as well as for three-dimensional (HCN) and six-dimensional (HOOH) molecular PESs.
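    A hedged 1D sketch of the interpolating-moving-least-squares idea: at each query point a low-order polynomial is fit by weighted least squares, with weights that diverge at the data points so the fit interpolates them. The weight exponent, degree, and data are illustrative; real IMLS PES fitting is multidimensional and also uses gradient/Hessian data.

      # 1D interpolating moving least squares (IMLS) sketch.
      import numpy as np

      def imls_eval(xq, x, y, degree=2, eps=1e-12, power=6):
          out = []
          for q in xq:
              w = 1.0 / (np.abs(x - q) ** power + eps)  # singular weights
              V = np.vander(x - q, degree + 1)          # local poly basis
              W = np.sqrt(w)
              coef, *_ = np.linalg.lstsq(V * W[:, None], y * W, rcond=None)
              out.append(coef[-1])                      # poly value at x = q
          return np.array(out)

      x = np.linspace(0, np.pi, 9)
      y = np.sin(x)
      xq = np.linspace(0, np.pi, 5)
      print(imls_eval(xq, x, y))  # close to sin(xq), exact at data points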

  8. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  9. Application of genetic algorithm-kernel partial least square as a novel nonlinear feature selection method: activity of carbonic anhydrase II inhibitors.

    PubMed

    Jalali-Heravi, Mehdi; Kyani, Anahita

    2007-05-01

    This paper introduces the genetic algorithm-kernel partial least squares (GA-KPLS) method as a novel nonlinear feature selection technique. It combines genetic algorithms (GAs), as powerful optimization methods, with KPLS, as a robust nonlinear statistical method for variable selection. This feature selection method is combined with an artificial neural network to develop a nonlinear QSAR model for predicting the activities of a series of substituted aromatic sulfonamides as carbonic anhydrase II (CA II) inhibitors. Eight simple one- and two-dimensional descriptors were selected by GA-KPLS and used as inputs for the artificial neural networks (ANNs). These parameters represent the role of the acceptor-donor pair, hydrogen bonding, hydrosolubility and lipophilicity of the active sites, and also the size of the inhibitors, in the inhibitor-isozyme interaction. The accuracy of the 8-4-1 networks was illustrated by the validation techniques of leave-one-out (LOO) and leave-multiple-out (LMO) cross-validation and Y-randomization. The superiority of this method (GA-KPLS-ANN) over the linear one (MLR) in a previous work, and also over GA-PLS-ANN, in which a linear feature selection method was used, indicates that the GA-KPLS approach is a powerful method for variable selection in nonlinear systems. PMID:17316919

  12. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background: Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results: Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions: HC
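
    A minimal sketch of the local-regression idea follows, with scikit-learn's KMeans standing in for the paper's fuzzy C-means clustering and plain PLS for the polynomial variant; the class name, cluster count, and toy response below are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.cross_decomposition import PLSRegression

        class HCPLSR:
            """Global PLS, then local PLS models within clusters of the
            global X-score space. Assumes a single response column and
            enough samples per cluster to fit the local models."""
            def __init__(self, n_clusters=3, n_components=2):
                self.n_clusters, self.n_components = n_clusters, n_components

            def fit(self, X, Y):
                self.global_pls = PLSRegression(self.n_components).fit(X, Y)
                T = self.global_pls.transform(X)          # global X-scores
                self.km = KMeans(self.n_clusters, n_init=10).fit(T)
                self.local = [PLSRegression(self.n_components)
                              .fit(X[self.km.labels_ == k], Y[self.km.labels_ == k])
                              for k in range(self.n_clusters)]
                return self

            def predict(self, X):
                labels = self.km.predict(self.global_pls.transform(X))
                out = np.empty((X.shape[0], 1))
                for k, pls in enumerate(self.local):
                    if np.any(labels == k):
                        out[labels == k] = pls.predict(X[labels == k])
                return out

        rng = np.random.default_rng(8)
        X = rng.uniform(-2, 2, size=(300, 4))
        y = np.sin(X[:, 0]) + X[:, 1] ** 2                # nonlinear response
        model = HCPLSR(n_clusters=4).fit(X, y.reshape(-1, 1))
        print(np.abs(model.predict(X)[:, 0] - y).mean())  # in-sample error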

  13. Speciation of Energy Critical Elements in Marine Ferromanganese Crusts and Nodules by Principal Component Analysis and Least-squares fits to XAFS Spectra

    NASA Astrophysics Data System (ADS)

    Foster, A. L.; Klofas, J. M.; Hein, J. R.; Koschinsky, A.; Bargar, J.; Dunham, R. E.; Conrad, T. A.

    2011-12-01

    Marine ferromanganese crusts and nodules ("Fe-Mn crusts") are considered a potential mineral resource due to their accumulation of several economically-important elements at concentrations above mean crustal abundances. They are typically composed of intergrown Fe oxyhydroxide and Mn oxide; thicker (older) crusts can also contain carbonate fluorapatite. We used X-ray absorption fine-structure (XAFS) spectroscopy, a molecular-scale structure probe, to determine the speciation of several elements (Te, Bi, Mo, Zr, Pt) in Fe-Mn crusts. As a first step in analysis of this dataset, we have conducted principal component analysis (PCA) of Te K-edge and Mo K-edge, k3-weighted XAFS spectra. The sample set consisted of 12 homogenized, ground Fe-Mn crust samples from 8 locations in the global ocean. One sample was subjected to a chemical leach to selectively remove Mn oxides and the elements associated with it. The samples in the study set contain 50-205 mg/kg Te (average = 88) and 97-802 mg/kg Mo (average = 567). PCAs of background-subtracted, normalized Te K-edge and Mo K-edge XAFS spectra were performed on a data matrix of 12 rows x 122 columns (rows = samples; columns = Te or Mo fluorescence value at each energy step) and results were visualized without rotation. The number of significant components was assessed by the Malinowski indicator function and the ability of the components to reconstruct the features (minus noise) of all sample spectra. Two components were significant by these criteria for both Te and Mo PCAs and described a total of 74 and 75% of the total variance, respectively. Reconstruction of potential model compounds by the principal components derived from PCAs on the sample set ("target transformation") provides a means of ranking models in terms of their utility for subsequent linear-combination, least-squares (LCLS) fits (the next step of data analysis). Synthetic end-member models of Te4+, Te6+, and Mo adsorbed to Fe(III) oxyhydroxide and Mn oxide were
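
    The PCA and target-transformation steps can be sketched as follows; the spectra, end members, and grid are synthetic stand-ins for the Te/Mo XAFS matrix (12 samples x 122 energy points), and a simple variance check replaces the Malinowski criterion for brevity.

        import numpy as np

        rng = np.random.default_rng(9)
        e = np.linspace(0, 12, 122)                       # k-grid, 122 points
        end1 = np.sin(2.0 * e) * np.exp(-0.1 * e)         # synthetic end members
        end2 = np.sin(2.6 * e + 0.5) * np.exp(-0.1 * e)
        frac = rng.uniform(0, 1, size=12)
        spectra = np.outer(frac, end1) + np.outer(1 - frac, end2)
        spectra += 0.01 * rng.normal(size=spectra.shape)  # measurement noise

        mean = spectra.mean(axis=0)
        U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
        r = 2                                             # significant components
        print("variance described:", (s[:r] ** 2).sum() / (s ** 2).sum())

        def target_transform(model):
            # least-squares reconstruction of a candidate model spectrum
            # from the r retained components; low misfit -> useful model
            coef, *_ = np.linalg.lstsq(Vt[:r].T, model - mean, rcond=None)
            recon = Vt[:r].T @ coef + mean
            return np.sum((model - recon) ** 2)

        print(target_transform(end1), target_transform(rng.normal(size=122)))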

  14. Non-linear partial least square regression increases the estimation accuracy of grass nitrogen and phosphorus using in situ hyperspectral and environmental data

    NASA Astrophysics Data System (ADS)

    Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.

    2013-08-01

    Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables, and (iii) measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.

  15. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations]

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids, by the nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^(-1)-type Sobolev functional space.

  16. On the Least-Squares Fitting of Slater-Type Orbitals with Gaussians: Reproduction of the STO-NG Fits Using Microsoft Excel and Maple

    ERIC Educational Resources Information Center

    Pye, Cory C.; Mercer, Colin J.

    2012-01-01

    The symbolic algebra program Maple and the spreadsheet Microsoft Excel were used in an attempt to reproduce the Gaussian fits to a Slater-type orbital, required to construct the popular STO-NG basis sets. The successes and pitfalls encountered in such an approach are chronicled. (Contains 1 table and 3 figures.)
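
    The same fit is straightforward to reproduce outside Excel and Maple; the sketch below least-squares fits three s-type Gaussians to a zeta = 1 Slater 1s orbital on a radial grid. The grid, weighting, and starting guess are choices of this sketch, so the optimum differs slightly from the overlap-maximized STO-3G values.

        import numpy as np
        from scipy.optimize import curve_fit

        r = np.linspace(1e-3, 8.0, 400)
        slater = np.exp(-r) / np.sqrt(np.pi)              # 1s STO, zeta = 1

        def sto_3g(r, a1, a2, a3, c1, c2, c3):
            # contraction of three normalized s-type Gaussians
            g = lambda a: (2 * a / np.pi) ** 0.75 * np.exp(-a * r**2)
            return c1 * g(a1) + c2 * g(a2) + c3 * g(a3)

        p0 = [0.1, 0.4, 2.0, 0.4, 0.5, 0.2]               # rough starting guess
        popt, _ = curve_fit(sto_3g, r, slater, p0=p0, maxfev=20000)
        print(popt)  # exponents/coefficients comparable to published STO-3G values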

  17. Fixed memory least squares filtering

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Buxbaum has reported on three algorithms for computing least squares estimates that are based on fixed amounts of data. In this correspondence, the filter is arranged as a point-deleting Kalman filter concatenated with the standard point-inclusion Kalman filter. The resulting algorithm is couched in a square root framework for greater numerical stability, and special attention is given to computer implementation.
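
    The point-deleting/point-including arrangement can be sketched in information form; the square-root formulation the paper uses for numerical stability is omitted for brevity, and the sliding-window line-fit below is illustrative.

        import numpy as np

        def fixed_memory_ls(H, y, window):
            # maintain the normal equations over a sliding window by adding
            # the newest observation and deleting the oldest
            n = H.shape[1]
            Lam, b, est = np.zeros((n, n)), np.zeros(n), []
            for k in range(len(y)):
                Lam += np.outer(H[k], H[k]); b += H[k] * y[k]    # include new
                if k >= window:
                    j = k - window                                # delete oldest
                    Lam -= np.outer(H[j], H[j]); b -= H[j] * y[j]
                if k >= window - 1:
                    est.append(np.linalg.solve(Lam, b))
            return np.array(est)

        t = np.linspace(0, 1, 200)
        H = np.column_stack([np.ones_like(t), t])
        y = 1.0 + 2.0 * t + 0.05 * np.random.default_rng(10).normal(size=t.size)
        print(fixed_memory_ls(H, y, window=50)[-1])   # ~[1, 2]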

  18. Theoretical foundation of the balanced minimum evolution method of phylogenetic inference and its relationship to weighted least-squares tree fitting.

    PubMed

    Desper, Richard; Gascuel, Olivier

    2004-03-01

    Due to its speed, the distance approach remains the best hope for building phylogenies on very large sets of taxa. Recently (R. Desper and O. Gascuel, J. Comp. Biol. 9:687-705, 2002), we introduced a new "balanced" minimum evolution (BME) principle, based on a branch length estimation scheme of Y. Pauplin (J. Mol. Evol. 51:41-47, 2000). Initial simulations suggested that FASTME, our program implementing the BME principle, was more accurate than or equivalent to all other distance methods we tested, with running time significantly faster than Neighbor-Joining (NJ). This article further explores the properties of the BME principle, and it explains and illustrates its impressive topological accuracy. We prove that the BME principle is a special case of the weighted least-squares approach, with biologically meaningful variances of the distance estimates. We show that the BME principle is statistically consistent. We demonstrate that FASTME only produces trees with positive branch lengths, a feature that separates this approach from NJ (and related methods) that may produce trees with branches with biologically meaningless negative lengths. Finally, we consider a large simulated data set, with 5,000 100-taxon trees generated by the Aldous beta-splitting distribution encompassing a range of distributions from Yule-Harding to uniform, and using a covarion-like model of sequence evolution. FASTME produces trees faster than NJ, and much faster than WEIGHBOR and the weighted least-squares implementation of PAUP*. Moreover, FASTME trees are consistently more accurate at all settings, ranging from Yule-Harding to uniform distributions, and all ranges of maximum pairwise divergence and departure from molecular clock. Interestingly, the covarion parameter has little effect on the tree quality for any of the algorithms. FASTME is freely available on the web.
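
    The weighted least-squares view of edge-length estimation can be sketched directly on a fixed topology; the 4-taxon tree, distances, and distance-squared variance model below are illustrative choices, not FASTME's algorithm.

        import numpy as np

        # tree ((1,2),(3,4)): external edges e1..e4, internal edge e5;
        # each row of A marks the edges on one pairwise path
        A = np.array([
            [1, 1, 0, 0, 0],   # d12 = e1 + e2
            [1, 0, 1, 0, 1],   # d13 = e1 + e3 + e5
            [1, 0, 0, 1, 1],   # d14
            [0, 1, 1, 0, 1],   # d23
            [0, 1, 0, 1, 1],   # d24
            [0, 0, 1, 1, 0],   # d34 = e3 + e4
        ], float)
        d = np.array([0.30, 0.50, 0.55, 0.56, 0.61, 0.35])
        W = np.diag(1.0 / d**2)            # variances ~ d^2, one common choice
        lengths = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
        print(lengths)                      # -> [0.12, 0.18, 0.15, 0.20, 0.23]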

  19. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N4

    SciTech Connect

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V.; Truhlar, Donald G.

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
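
    A one-dimensional sketch of the underlying interpolating moving least squares idea (without the permutational invariance or statistical localization of the full method) follows; the quartic singular weight and local quadratic basis are illustrative choices.

        import numpy as np

        def imls(x_eval, x_data, y_data, eps=1e-12):
            # at each evaluation point, fit a local quadratic by weighted
            # least squares; weights diverge at the data points, so the
            # resulting surface interpolates them
            out = []
            for x in x_eval:
                w = 1.0 / ((x - x_data) ** 4 + eps)   # singular weight
                V = np.vander(x_data - x, 3)          # local quadratic basis
                W = np.diag(w)
                c = np.linalg.solve(V.T @ W @ V, V.T @ W @ y_data)
                out.append(c[-1])                     # value of local fit at x
            return np.array(out)

        x = np.linspace(0, np.pi, 8)
        xs = np.linspace(0, np.pi, 100)
        print(np.max(np.abs(imls(xs, x, np.sin(x)) - np.sin(xs))))  # small error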

  20. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
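
    A weighted Langmuir fit of the kind discussed can be sketched with SciPy; the sorption data and the 5% relative-error model below are synthetic and purely illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, Smax, K):
            # Langmuir isotherm: sorbed amount vs equilibrium concentration
            return Smax * K * C / (1.0 + K * C)

        C = np.array([0.5, 1, 2, 5, 10, 20, 50.])         # synthetic conc. (mg/L)
        S = np.array([38, 65, 102, 152, 180, 196, 208.])  # synthetic sorbed P (mg/kg)
        sigma = 0.05 * S                                  # assumed 5% relative error
        popt, pcov = curve_fit(langmuir, C, S, p0=[200, 0.3],
                               sigma=sigma, absolute_sigma=True)
        print(popt, np.sqrt(np.diag(pcov)))               # Smax, K and their SEs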

  1. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012); 10.1063/1.4732310]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.
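
    The closed-form least-squares step can be imitated on a toy symmetric matrix; the sizes, the random "ERI" stand-in, and the plain pseudo-inverse below are illustrative, not the production algorithm.

        import numpy as np

        # approximate a symmetric matrix E[(pq),(rs)] as P Z P^T, where
        # P[pq, P] = X[p, P] * X[q, P] collocates orbital pairs on a grid
        # and Z is the analytic least-squares solution
        rng = np.random.default_rng(11)
        n, g = 6, 20                                 # orbitals, grid points (toy)
        X = rng.normal(size=(n, g))                  # collocation matrix
        E = rng.normal(size=(n * n, n * n))
        E = 0.5 * (E + E.T)                          # symmetric toy tensor

        P = (X[:, None, :] * X[None, :, :]).reshape(n * n, g)
        Sinv = np.linalg.pinv(P.T @ P)               # grid metric (pseudo)inverse
        Z = Sinv @ P.T @ E @ P @ Sinv                # least-squares core tensor
        E_thc = P @ Z @ P.T
        print(np.linalg.norm(E - E_thc) / np.linalg.norm(E))   # relative error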

  2. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986

  3. Götterdämmerung over total least squares

    NASA Astrophysics Data System (ADS)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have been developed in the past also by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line in 3D and of the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
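
    The direct TLS solution for a 2D line can be sketched via the smallest singular vector of the centered coordinate matrix, which is the eigenvalue-equation route discussed above; the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0, 10, 30)
        y = 0.7 * x + 1.5 + rng.normal(scale=0.2, size=x.size)
        x = x + rng.normal(scale=0.2, size=x.size)    # errors in both coordinates

        # line a*x + b*y = c; (a, b) is the smallest right singular vector
        # of the centered data, and the TLS line passes through the centroid
        M = np.column_stack([x - x.mean(), y - y.mean()])
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        a, b = Vt[-1]
        c = a * x.mean() + b * y.mean()
        print(f"slope = {-a/b:.3f}, intercept = {c/b:.3f}")   # ~0.7, ~1.5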

  4. Understanding Least Squares through Monte Carlo Calculations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    The method of least squares (LS) is considered an important data analysis tool available to physical scientists. The mathematics of linear least squares (LLS) is summarized in a very compact matrix notation that renders it practically "formulaic".
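
    The Monte Carlo approach is easy to reproduce: simulate many noisy data sets, fit each by linear LS, and compare the scatter of the fitted slope with the formula-based standard error. The straight-line model and noise level below are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(0, 10, 11)
        A = np.column_stack([np.ones_like(x), x])     # design matrix
        sigma = 0.5
        slopes = []
        for _ in range(10000):
            y = 2.0 + 0.3 * x + rng.normal(scale=sigma, size=x.size)
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            slopes.append(beta[1])

        cov = sigma**2 * np.linalg.inv(A.T @ A)       # textbook LLS covariance
        print(np.std(slopes), np.sqrt(cov[1, 1]))     # should agree closely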

  5. Source Localization using Stochastic Approximation and Least Squares Methods

    SciTech Connect

    Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis

    2009-03-05

    This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
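
    The nonlinear least-squares branch can be sketched as follows, assuming an illustrative inverse-square dispersion model c = A/|x - s|^2; the paper's plume model and the SA comparison are not reproduced here.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(4)
        sensors = rng.uniform(0, 10, size=(15, 2))        # sensor positions
        source_true, A_true = np.array([3.0, 7.0]), 50.0
        c = A_true / np.sum((sensors - source_true) ** 2, axis=1)
        c *= 1 + 0.05 * rng.normal(size=c.size)           # 5% measurement noise

        def resid(p):
            # residuals between modeled and measured concentrations
            s, A = p[:2], p[2]
            return A / np.sum((sensors - s) ** 2, axis=1) - c

        sol = least_squares(resid, x0=[5.0, 5.0, 10.0])
        print(sol.x)   # ~[3, 7, 50]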

  6. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
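
    The linearized chi-square step described above corresponds to a Gauss-Newton iteration; the sketch below omits the convergence checks and statistical output of the program, and the exponential model and starting guess are illustrative.

        import numpy as np

        def gauss_newton(f, jac, p0, x, y, w, iters=20):
            # solve the linearized weighted normal equations and iterate
            p = np.asarray(p0, float)
            for _ in range(iters):
                r = w * (y - f(x, p))            # weighted residuals
                J = w[:, None] * jac(x, p)       # weighted Jacobian
                p = p + np.linalg.solve(J.T @ J, J.T @ r)
            return p

        f = lambda x, p: p[0] * np.exp(-p[1] * x)
        jac = lambda x, p: np.column_stack([np.exp(-p[1] * x),
                                            -p[0] * x * np.exp(-p[1] * x)])
        x = np.linspace(0, 4, 40)
        y = 3.0 * np.exp(-0.8 * x)
        print(gauss_newton(f, jac, [2.0, 0.5], x, y, np.ones_like(x)))  # -> [3, 0.8]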

  7. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
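
    The augmentation idea can be imitated on synthetic spectra: calibrate by CLS, extract the dominant residual shape, and append it to the spectral model before prediction. The sizes, the sinusoidal "drift" shape, and the SVD-based shape extraction below are illustrative choices, not the patented procedure.

        import numpy as np

        rng = np.random.default_rng(5)
        wav = 100                                     # spectral channels
        K_true = np.abs(rng.normal(size=(2, wav)))    # two pure-component spectra
        drift = np.sin(np.linspace(0, np.pi, wav))    # unmodeled spectral variation

        C_cal = rng.uniform(0.1, 1.0, size=(12, 2))   # calibration concentrations
        A_cal = C_cal @ K_true + 0.2 * rng.uniform(size=(12, 1)) * drift

        K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]   # CLS calibration
        R = A_cal - C_cal @ K_hat                              # spectral residuals
        _, _, Vt = np.linalg.svd(R, full_matrices=False)
        K_aug = np.vstack([K_hat, Vt[0]])                      # augmented model

        # prediction on a new mixture that also contains the drift shape
        c_true = np.array([0.4, 0.7])
        a_new = c_true @ K_true + 0.15 * drift
        c_hat = np.linalg.lstsq(K_aug.T, a_new, rcond=None)[0]
        print(c_hat[:2])   # close to [0.4, 0.7]; third element absorbs the drift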

  8. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  9. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  10. Generalized adjustment by least squares ( GALS).

    USGS Publications Warehouse

    Elassal, A.A.

    1983-01-01

    The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author

  11. Tutorial on a chemical model building by least-squares non-linear regression of multiwavelength spectrophotometric pH-titration data.

    PubMed

    Meloun, Milan; Bordovská, Sylva; Syrový, Tomás; Vrána, Ales

    2006-10-27

    Although modern instrumentation enables an increased amount of data to be delivered in a shorter time, computer-assisted spectra analysis is limited by the intelligence and by the programmed logic of the tools applied. The proposed tutorial covers all the main steps of the data processing involved in chemical model building, from calculating the concentration profiles to fitting, by spectral regression, the protonation constants of the chemical model to the multiwavelength, multivariate data measured. Suggested diagnostics are examined to see whether the chemical model hypothesis can be accepted, as an incorrect model with false stoichiometric indices may lead to slow convergence, cyclization or divergence of the regression minimization. The diagnostics concern the physical meaning of the unknown parameters beta(qr) and epsilon(qr), the physical sense of the associated species concentrations, parametric correlation coefficients, goodness-of-fit tests, error analyses and spectra deconvolution, and determination of the correct number of light-absorbing species. All of the benefits of spectrophotometric data analysis are demonstrated on the protonation constants of the ionizable anticancer drug 7-ethyl-10-hydroxycamptothecine, using data double-checked with the SQUAD(84) and SPECFIT/32 regression programs and with factor analysis by the INDICES program. The experimental determination of the protonation constants was complemented by their computational prediction, based on knowledge of the chemical structure of the drug, using the combined MARVIN and PALLAS programs. If the proposed model adequately represents the data, the residuals should form a random pattern with a normal distribution N(0, s2), with the residual mean equal to zero and the standard deviation of residuals close to the experimental noise. Examination of residual plots may be assisted by a graphical analysis of residuals, and systematic departures from randomness indicate that the model and parameter estimates are not

  12. Direct forecasting of subsurface flow response from non-linear dynamic data by linear least-squares in canonical functional principal component space

    NASA Astrophysics Data System (ADS)

    Satija, Aaditya; Caers, Jef

    2015-03-01

    Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming requiring iteration over a high dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior models realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA); the latter aiming to establish a linear relationship between data and forecast components. If such mapping is successful, then we illustrate with several cases that (1) simple regression techniques with a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of uncertainty obtained from full posterior sampling with rejection sampling.

  13. Collinearity in Least-Squares Analysis

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  14. Weighted conditional least-squares estimation

    SciTech Connect

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, better-known estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.

  15. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  16. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.

  17. A spectral mimetic least-squares method

    SciTech Connect

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  18. A spectral mimetic least-squares method

    DOE PAGESBeta

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  19. Steady and transient least square solvers for thermal problems

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1987-01-01

    This paper develops a hierarchical least square solution algorithm for highly nonlinear heat transfer problems. The methodology's capability is such that both steady and transient implicit formulations can be handled. This includes problems arising from highly nonlinear heat transfer systems modeled by either finite-element or finite-difference schemes. The overall procedure developed enables localized updating, iteration, and convergence checking as well as constraint application. The localized updating can be performed at a variety of hierarchical levels, i.e., degree of freedom, substructural, material-nonlinear groups, and/or boundary groups. The choice of such partitions can be made via energy partitioning or nonlinearity levels as well as by user selection. Overall, this leads to extremely robust computational characteristics. To demonstrate the methodology, problems are drawn from nonlinear heat conduction. These are used to quantify the robust capabilities of the hierarchical least square scheme.

  20. Weighted least squares stationary approximations to linear systems.

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1972-01-01

    Investigation of the problem of replacing a certain time-varying linear system by a stationary one. Several quadratic criteria are proposed to aid in determining suitable candidate systems. One criterion for choosing the matrix B (in the stationary system) is initial-condition dependent, and another bounds the 'worst case' homogeneous system performance. Both of these criteria produce weighted least square fits.

  1. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…

  2. Kriging and its relation to least squares

    SciTech Connect

    Oden, N.

    1984-11-01

    Kriging is a technique for producing contour maps that, under certain conditions, are optimal in a mean squared error sense. The relation of Kriging to Least Squares is reviewed here. New methods for analyzing residuals are suggested, ML estimators inspected, and an expression derived for calculating cross-validation error. An example using ground water data is provided.

  3. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-01

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  4. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.

  5. A Limitation with Least Squares Predictions

    ERIC Educational Resources Information Center

    Bittner, Teresa L.

    2013-01-01

    Although researchers have documented that some data make larger contributions than others to predictions made with least squares models, it is relatively unknown that some data actually make no contribution to the predictions produced by these models. This article explores such noncontributory data. (Contains 1 table and 2 figures.)

  6. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  7. New approach to breast cancer CAD using partial least squares and kernel-partial least squares

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Heine, John; Embrechts, Mark; Smith, Tom; Choma, Robert; Wong, Lut

    2005-04-01

    Breast cancer is second only to lung cancer as a tumor-related cause of death in women. Currently, the method of choice for the early detection of breast cancer is mammography. While sensitive to the detection of breast cancer, its positive predictive value (PPV) is low, resulting in biopsies that are only 15-34% likely to reveal malignancy. This paper explores the use of two novel approaches called Partial Least Squares (PLS) and Kernel-PLS (K-PLS) to the diagnosis of breast cancer. The approach is based on optimization for the partial least squares (PLS) algorithm for linear regression and the K-PLS algorithm for non-linear regression. Preliminary results show that both the PLS and K-PLS paradigms achieved comparable results with three separate support vector learning machines (SVLMs), where these SVLMs were known to have been trained to a global minimum. That is, the average performance of the three separate SVLMs was Az = 0.9167927, with an average partial Az (Az90) = 0.5684283. These results compare favorably with the K-PLS paradigm, which obtained an Az = 0.907 and partial Az = 0.6123. The PLS paradigm provided comparable results. Secondly, both the K-PLS and PLS paradigms outperformed the ANN in that the Az index improved by about 14% (Az ~ 0.907 compared to the ANN Az of ~ 0.8). The "Press R squared" value for the PLS and K-PLS machine learning algorithms were 0.89 and 0.9, respectively, which is in good agreement with the other MOP values.

  8. Parallel block schemes for large scale least squares computations

    SciTech Connect

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

  9. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.

  10. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  11. Total least squares for anomalous change detection

    NASA Astrophysics Data System (ADS)

    Theiler, James; Matsekh, Anna M.

    2010-04-01

    A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
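
    For a single-band pair, the TLSQ residual idea reduces to flagging the pixels farthest (orthogonally) from the total least-squares regression line; the sketch below uses synthetic images with a few implanted changes.

        import numpy as np

        rng = np.random.default_rng(6)
        x = rng.normal(size=10000)                 # image 1 pixels (flattened)
        y = 1.3 * x + 0.1 * rng.normal(size=x.size)
        y[:5] += 3.0                               # implant a few true changes

        # TLS regression of the two images against each other: the smallest
        # singular vector gives the normal direction, residuals are the
        # orthogonal distances of the centered pixel pairs
        M = np.column_stack([x - x.mean(), y - y.mean()])
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        resid = M @ Vt[-1]
        anomalies = np.argsort(np.abs(resid))[-5:]
        print(sorted(anomalies))                   # recovers the implanted pixels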

  12. Bootstrapping least-squares estimates in biochemical reaction networks.

    PubMed

    Linder, Daniel F; Rempała, Grzegorz A

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least-squares estimates (LSEs) of rate constants in mass action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to network's partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769
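
    The fit-then-resample idea can be sketched on a toy A -> B decay network; ordinary Gaussian resampling of residuals stands in for the paper's diffusion and linear-noise approximations, and all data are synthetic.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def traj(k, t):
            # deterministic ODE limit of the A -> B network: dA/dt = -k*A
            sol = solve_ivp(lambda t, a: -k * a, (t[0], t[-1]), [1.0], t_eval=t)
            return sol.y[0]

        rng = np.random.default_rng(7)
        t = np.linspace(0, 5, 20)
        obs = traj(0.6, t) + 0.02 * rng.normal(size=t.size)   # noisy trajectory

        fit = lambda data: least_squares(lambda k: traj(k[0], t) - data,
                                         x0=[1.0]).x[0]
        k_hat = fit(obs)

        # parametric bootstrap: refit on resampled synthetic trajectories
        boot = [fit(traj(k_hat, t) + 0.02 * rng.normal(size=t.size))
                for _ in range(200)]
        print(k_hat, np.percentile(boot, [2.5, 97.5]))  # estimate and 95% bounds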

  13. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to network’s partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

  14. Bootstrapping least-squares estimates in biochemical reaction networks.

    PubMed

    Linder, Daniel F; Rempała, Grzegorz A

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least-squares estimates (LSEs) of rate constants in mass action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to network's partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods.

  15. Source allocation by least-squares hydrocarbon fingerprint matching

    SciTech Connect

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  16. Source allocation by least-squares hydrocarbon fingerprint matching.

    PubMed

    Burns, William A; Mudge, Stephen M; Bence, A Edward; Boehm, Paul D; Brown, John S; Page, David S; Parker, Keith R

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. PMID:17144278

  17. Least Squares Estimation of Item Response Theory Linking Coefficients.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    2001-01-01

    Discusses three types of least squares estimation (generalized, unweighted, and weighted). Results from a Monte Carlo simulation show that, in comparison with other least squares methods, the weighted least squares method generally reduced bias without increasing asymptotic standard errors. (SLD)

  18. Total least squares for anomalous change detection

    SciTech Connect

    Theiler, James P; Matsekh, Anna M

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.

  19. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein normally means the spectral shape of a non-calibrated chemical component in the sample mixture, but it can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  20. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.

  1. Vehicle detection using partial least squares.

    PubMed

    Kembhavi, Aniruddha; Harwood, David; Davis, Larry S

    2011-06-01

    Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and a simple yet powerful image descriptor named Pairs of Pixels that captures the structural characteristics of objects. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance.
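
    A minimal sketch of the dimensionality-reduction step, using scikit-learn's PLSRegression as a stand-in for the paper's partial least squares projection; the feature matrix and labels are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2000))          # high-dimensional descriptors (toy stand-in)
w = rng.normal(size=2000)
y = (X @ w > 0).astype(float)             # binary vehicle / non-vehicle labels

pls = PLSRegression(n_components=20)      # project onto 20 latent directions
pls.fit(X, y)
Z = pls.transform(X)                      # low-dimensional scores for a classifier
print("reduced shape:", Z.shape)          # (500, 20)
```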

  2. Least squares estimation in stochastic biochemical networks.

    PubMed

    Rempala, Grzegorz A

    2012-08-01

    The paper presents results on the asymptotic properties of the least-squares estimates (LSEs) of the reaction constants in mass-action, stochastic, biochemical network models. LSEs are assumed to be based on the longitudinal data from partially observed trajectories of a stochastic dynamical system, modeled as a continuous-time, pure jump Markov process. Under certain regularity conditions on such a process, it is shown that the vector of LSEs is jointly consistent and asymptotically normal, with the asymptotic covariance structure given in terms of a system of ordinary differential equations (ODE). The derived asymptotic properties hold true as the biochemical network size (the total species number) increases, in which case the stochastic dynamical system converges to the deterministic mass-action ODE. An example is provided, based on synthetic as well as RT-PCR data from the retro-transcription network of the LINE1 gene.

  3. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.

  4. Forecasting Istanbul monthly temperature by multivariate partial least square

    NASA Astrophysics Data System (ADS)

    Ertaç, Mefharet; Firuzan, Esin; Solum, Şenol

    2015-07-01

    Weather forecasting, especially for temperature, has always been a popular subject since it affects our daily life and, as statistics does, always involves uncertainty. The goals of this study are (a) to forecast monthly mean temperature using meteorological variables such as temperature, humidity and rainfall; and (b) to improve forecast ability by evaluating the forecasting errors with respect to parameter changes and to local or global forecasting methods. Approximately 100 years of meteorological data from 54 automatic meteorology observation stations of Istanbul, the mega-city of Turkey, are analyzed to characterize the meteorological behaviour of the city. A new partial least square (PLS) forecasting technique based on chaotic analysis is developed using nonlinear time series and variable selection methods. The proposed model is also compared with artificial neural networks (ANNs), which model the relation between inputs and outputs nonlinearly using neurons that work like the human brain. Ordinary least square (OLS), PLS and ANN methods are used for nonlinear time series forecasting in this study. The major findings are the chaotic nature of the meteorological data of Istanbul and the superior performance of the proposed PLS model.

  5. On the Significance of Properly Weighting Sorption Data for Least Squares Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...

  6. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that address the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

  7. Linear least squares compartmental-model-independent parameter identification in PET.

    PubMed

    Thie, J A; Smith, G T; Hubner, K F

    1997-02-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity, and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoidance of the risks of convergence failures or false solutions in iterative least squares, and provision of various visualizations of the uptake process through straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible.

  8. Optimal control by least squares support vector machines.

    PubMed

    Suykens, J A; Vandewalle, J; De Moor, B

    2001-01-01

    Support vector machines have been very successful in pattern recognition and function estimation problems. In this paper we introduce the use of least squares support vector machines (LS-SVM's) for the optimal control of nonlinear systems. Linear and neural full static state feedback controllers are considered. The problem is formulated in such a way that it incorporates the N-stage optimal control problem as well as a least squares support vector machine approach for mapping the state space into the action space. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. The results are discussed for support vector machines with radial basis function kernel. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. The curse of dimensionality is avoided in comparison with defining a regular grid for the centers in classical radial basis function networks. This is at the expense of taking the trajectory of state variables as additional unknowns in the optimization problem, while classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data, while in the primal space the number of unknowns can be infinite dimensional. The method is illustrated both on stabilization and tracking problems including examples on swinging up an inverted pendulum with local stabilization at the endpoint and a tracking problem for a ball and beam system.
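
    The control formulation above is specialized, but the core LS-SVM machinery is a single linear system. The sketch below solves the standard LS-SVM function-estimation system with an RBF kernel on hypothetical data; it is not the authors' control scheme.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least squares SVM regression with an RBF kernel.

    Solves the KKT system  [0  1^T; 1  K + I/gamma] [b; alpha] = [0; y].
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, support values alpha

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)
b, alpha = lssvm_fit(X, y)
print("bias:", round(b, 3), "first alphas:", np.round(alpha[:3], 3))
```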

  9. Orthogonal least squares learning algorithm for radial basis function networks.

    PubMed

    Chen, S; Cowan, C N; Grant, P M

    1991-01-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
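
    A minimal sketch of the greedy orthogonal least-squares selection loop described above: candidate regressor columns are orthogonalized against the already-chosen basis, and the one explaining the most remaining output energy is selected. Data and tolerances are illustrative.

```python
import numpy as np

def ols_select(Phi, y, n_centers):
    """Greedy orthogonal least squares: pick columns of Phi (candidate RBF
    responses) that maximize the increment to explained output energy."""
    selected, Q = [], []
    for _ in range(n_centers):
        best, best_err = None, -np.inf
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            w = Phi[:, j].copy()
            for q in Q:                        # orthogonalize against chosen basis
                w -= (q @ w) * q
            norm = np.linalg.norm(w)
            if norm < 1e-10:                   # numerically dependent candidate
                continue
            w /= norm
            err = (w @ y) ** 2                 # output energy explained by this direction
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        Q.append(best_w)
    return selected

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 200)
candidates = x[:100]                           # candidate centers = first data points
Phi = np.exp(-(x[:, None] - candidates[None, :]) ** 2 / 0.1)
y = np.sinc(3 * x)
print("chosen centers:", ols_select(Phi, y, 5))
```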

  10. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of nonlinear differential equations. In this model the transmission of influenza is governed by the contact rate between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, with the Finite Element Method and the Euler Method used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily proportion of contacts that transmit infection, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameter and the number of infected people.
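
    A hedged sketch of the estimation loop: simulate the SIR model for a trial contact rate, compare the infected trajectory with data, and minimize a regularized least-squares misfit. It uses scipy's ODE solver rather than the paper's FEM/Euler discretization, and all numbers are synthetic.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir(t, u, beta, gamma):
    s, i, r = u
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def simulate_infected(beta, t_obs, gamma=0.25, u0=(0.99, 0.01, 0.0)):
    sol = solve_ivp(sir, (t_obs[0], t_obs[-1]), u0, t_eval=t_obs, args=(beta, gamma))
    return sol.y[1]                                # infected fraction over time

t_obs = np.linspace(0, 60, 30)
rng = np.random.default_rng(5)
data = simulate_infected(0.6, t_obs) + rng.normal(0, 0.002, t_obs.size)

lam = 1e-3                                         # regularization weight (illustrative)
def residuals(p):
    # data misfit plus a Tikhonov-style penalty on the parameter
    return np.concatenate([simulate_infected(p[0], t_obs) - data, [lam * p[0]]])

fit = least_squares(residuals, x0=[0.3], bounds=(0.0, 5.0))
print("estimated contact rate:", round(fit.x[0], 3))
```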

  11. Least-squares sequential parameter and state estimation for large space structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Eliazov, T.; Montgomery, R. C.

    1982-01-01

    This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allows for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.

  12. Orthogonal least squares based complex-valued functional link network.

    PubMed

    Amin, Md Faijul; Savitha, Ramasamy; Amin, Muhammad Ilias; Murase, Kazuyuki

    2012-08-01

    Functional link networks are single-layered neural networks that impose nonlinearity in the input layer using nonlinear functions of the original input variables. In this paper, we present a fully complex-valued functional link network (CFLN) with multivariate polynomials as the nonlinear functions. Unlike multilayer neural networks, the CFLN is free from local minima problem, and it offers very fast learning of parameters because of its linear structure. Polynomial based CFLN does not require an activation function which is a major concern in the complex-valued neural networks. However, it is important to select a smaller subset of polynomial terms (monomials) for faster and better performance since the number of all possible monomials may be quite large. Here, we use the orthogonal least squares (OLS) method in a constructive fashion (starting from lower degree to higher) for the selection of a parsimonious subset of monomials. It is argued here that computing CFLN in purely complex domain is advantageous than in double-dimensional real domain, in terms of number of connection parameters, faster design, and possibly generalization performance. Simulation results on a function approximation, wind prediction with real-world data, and a nonlinear channel equalization problem exhibit that the OLS based CFLN yields very simple structure having favorable performance.

  14. Evaluation of fuzzy inference systems using fuzzy least squares

    NASA Technical Reports Server (NTRS)

    Barone, Joseph M.

    1992-01-01

    Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.

  15. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, Dennis C.; Romero, Louis A.

    1995-01-01

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals.

  17. Spreadsheet for designing valid least-squares calibrations: A tutorial.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-02-01

    Instrumental methods of analysis are used to define the price of goods, the compliance of products with a regulation, or the outcome of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not so frequently used due to the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for least-squares regression model application, the assessment of instrumental response linearity, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful for cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented.
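
    As a minimal illustration of one piece of such a tutorial (not the spreadsheet itself), the sketch below fits an unweighted straight-line calibration and propagates the standard uncertainty of a concentration interpolated from it, using the usual calibration-uncertainty formula from regression statistics; the data are hypothetical.

```python
import numpy as np

# Hypothetical calibration data: concentration (x) vs instrument signal (y)
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.02, 0.21, 0.39, 0.62, 0.80, 1.01])

n = x.size
b, a = np.polyfit(x, y, 1)                     # slope, intercept
resid = y - (a + b * x)
s_res = np.sqrt((resid**2).sum() / (n - 2))    # residual standard deviation
sxx = ((x - x.mean())**2).sum()

# Concentration of an unknown from m replicate signal readings
y0, m = 0.55, 3
x0 = (y0 - a) / b
s_x0 = (s_res / b) * np.sqrt(1/m + 1/n + (y0 - y.mean())**2 / (b**2 * sxx))
print(f"x0 = {x0:.3f} +/- {s_x0:.3f} (standard uncertainty)")
```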

  18. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

  19. Optimisation on the least squares identification of dynamical systems with application to hemodynamic modelling.

    PubMed

    Pan, Yi; Zheng, Ying; Harris, Sam; Coca, Daniel; Johnston, David; Mayhew, John; Billings, Stephen

    2009-01-01

    Dynamic modelling using the traditional least squares method with noisy input/output data can yield biased and sometimes unstable model predictions. This is largely because the cost function employed by the traditional least squares method is based on the one-step-ahead prediction errors. In this paper, the model-predicted-output errors are used in estimating the model parameters. As the cost function is highly nonlinear in terms of the model parameters, the particle swarm optimisation method is used to search for the optimal parameters. We will show that compared with model predictions using the traditional least squares method, the model-predicted-output approach is more robust at dealing with noisy input/output data. The algorithm is applied to identify the dynamic relationship between changes in cerebral blood flow and volume due to evoked changes in neural activity and is shown to produce better predictions than that using the least squares method.

  20. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation (−∇·(∇u) + u = f) is recast as a first order system (−∇·p + u = f, ∇u − p = 0). The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first order system.

  1. Parsimonious extreme learning machine using recursive orthogonal least squares.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multiinput-multioutput single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved for during the model selection procedure but are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results. PMID:25291736

  2. Applied Algebra: The Modeling Technique of Least Squares

    ERIC Educational Resources Information Center

    Zelkowski, Jeremy; Mayes, Robert

    2008-01-01

    The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)

  3. A Comparison of the Method of Least Squares and the Method of Averages. Classroom Notes

    ERIC Educational Resources Information Center

    Glaister, P.

    2004-01-01

    Two techniques for determining a straight line fit to data are compared. This article reviews two simple techniques for fitting a straight line to a set of data, namely the method of averages and the method of least squares. These methods are compared by showing the results of a simple analysis, together with a number of tests based on randomized…

  4. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    SciTech Connect

    Torello, David; Kim, Jin-Yeon; Qu, Jianmin; Jacobs, Laurence J.

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
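
    A heavily simplified sketch of the curve-fitting step: to leading order the normalized second-harmonic amplitude grows linearly with propagation distance, and the fitted slope is proportional to the nonlinearity parameter (material and frequency constants, and the paper's diffraction and attenuation corrections, are omitted). All measurements below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical harmonic amplitudes measured at several propagation distances
x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])        # mm
A1 = np.array([1.00, 0.98, 0.97, 0.95, 0.94])       # fundamental
A2 = np.array([0.010, 0.014, 0.019, 0.023, 0.028])  # second harmonic

# To leading order A2/A1^2 grows linearly with distance; the slope is
# proportional to the nonlinearity parameter (constants omitted here).
line = lambda x, slope, intercept: slope * x + intercept
(slope, intercept), _ = curve_fit(line, x, A2 / A1**2)
print("relative nonlinearity (slope):", round(slope, 5))
```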

  5. Least-squares RTM with L1 norm regularisation

    NASA Astrophysics Data System (ADS)

    Wu, Di; Yao, Gang; Cao, Jingjie; Wang, Yanghua

    2016-10-01

    Reverse time migration (RTM), for imaging complex Earth models, is a reversal procedure of the forward modelling of seismic wavefields, and hence can be formulated as an inverse problem. The least-squares RTM method attempts to minimise the difference between the observed field data and the synthetic data generated by the migration image. It can reduce the artefacts in the images of a conventional RTM which uses an adjoint operator, instead of an inverse operator, for the migration. However, as the least-squares inversion provides an average solution with minimal variation, the resolution of the reflectivity image is compromised. This paper presents the least-squares RTM method with a model constraint defined by an L1-norm of the reflectivity image. For solving the least-squares RTM with L1 norm regularisation, the inversion is reformulated as a ‘basis pursuit de-noise (BPDN)’ problem, and is solved directly using an algorithm called ‘spectral projected gradient for L1 minimisation (SPGL1)’. Three numerical examples demonstrate the effectiveness of the method which can mitigate artefacts and produce clean images with significantly higher resolution than the least-squares RTM without such a constraint.
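
    The paper solves a BPDN formulation with SPGL1; as a stand-in, the sketch below applies plain iterative soft-thresholding (ISTA) to an L1-regularized least-squares problem, with a random matrix playing the role of the migration operator.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(100, 300))                # stand-in for the migration operator
x_true = np.zeros(300)
x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]       # sparse "reflectivity"
b = A @ x_true + rng.normal(0, 0.01, 100)
x_hat = ista(A, b)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```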

  6. A Least-Squares Transport Equation Compatible with Voids

    SciTech Connect

    Hansen, Jon; Peterson, Jacob; Morel, Jim; Ragusa, Jean; Wang, Yaqi

    2014-12-01

    Standard second-order self-adjoint forms of the transport equation, such as the even-parity, odd-parity, and self-adjoint angular flux equation, cannot be used in voids. Perhaps more important, they experience numerical convergence difficulties in near-voids. Here we present a new form of a second-order self-adjoint transport equation that has an advantage relative to standard forms in that it can be used in voids or near-voids. Our equation is closely related to the standard least-squares form of the transport equation with both equations being applicable in a void and having a nonconservative analytic form. However, unlike the standard least-squares form of the transport equation, our least-squares equation is compatible with source iteration. It has been found that the standard least-squares form of the transport equation with a linear-continuous finite-element spatial discretization has difficulty in the thick diffusion limit. Here we extensively test the 1D slab-geometry version of our scheme with respect to void solutions, spatial convergence rate, and the intermediate and thick diffusion limits. We also define an effective diffusion synthetic acceleration scheme for our discretization. Our conclusion is that our least-squares Sn formulation represents an excellent alternative to existing second-order Sn transport formulations.

  7. Integer least-squares theory for the GNSS compass

    NASA Astrophysics Data System (ADS)

    Teunissen, P. J. G.

    2010-07-01

    Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory

  8. Application of new least-squares methods for the quantitative infrared analysis of multicomponent samples

    SciTech Connect

    Haaland, D.M.; Easterling, R.G.

    1982-11-01

    Improvements have been made in previous least-squares regression analyses of infrared spectra for the quantitative estimation of concentrations of multicomponent mixtures. Spectral baselines are fitted by least-squares methods, and overlapping spectral features are accounted for in the fitting procedure. Selection of peaks above a threshold value reduces computation time and data storage requirements. Four weighted least-squares methods incorporating different baseline assumptions were investigated using FT-IR spectra of the three pure xylene isomers and their mixtures. By fitting only regions of the spectra that follow Beer's Law, accurate results can be obtained using three of the fitting methods even when baselines are not corrected to zero. Accurate results can also be obtained using one of the fits even in the presence of Beer's Law deviations. This is a consequence of pooling the weighted results for each spectral peak such that the greatest weighting is automatically given to those peaks that adhere to Beer's Law. It has been shown with the xylene spectra that semiquantitative results can be obtained even when all the major components are not known or when expected components are not present. This improvement over previous methods greatly expands the utility of quantitative least-squares analyses.

  9. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  10. Topology testing of phylogenies using least squares methods

    PubMed Central

    Czarna, Aleksandra; Sanjuán, Rafael; González-Candelas, Fernando; Wróbel, Borys

    2006-01-01

    Background The least squares (LS) method for constructing confidence sets of trees is closely related to LS tree building methods, in which the goodness of fit of the distances measured on the tree (patristic distances) to the observed distances between taxa is the criterion used for selecting the best topology. The generalized LS (GLS) method for topology testing is often frustrated by the computational difficulties in calculating the covariance matrix and its inverse, which in practice requires approximations. The weighted LS (WLS) allows for a more efficient albeit approximate calculation of the test statistic by ignoring the covariances between the distances. Results The goal of this paper is to assess the applicability of the LS approach for constructing confidence sets of trees. We show that the approximations inherent to the WLS method did not affect negatively the accuracy and reliability of the test both in the analysis of biological sequences and DNA-DNA hybridization data (for which character-based testing methods cannot be used). On the other hand, we report several problems for the GLS method, at least for the available implementation. For many data sets of biological sequences, the GLS statistic could not be calculated. For some data sets for which it could, the GLS method included all the possible trees in the confidence set despite a strong phylogenetic signal in the data. Finally, contrary to WLS, for simulated sequences GLS showed undercoverage (frequent non-inclusion of the true tree in the confidence set). Conclusion The WLS method provides a computationally efficient approximation to the GLS useful especially in exploratory analyses of confidence sets of trees, when assessing the phylogenetic signal in the data, and when other methods are not available. PMID:17150093

  11. Frequency weighted least squares reconstruction of truncated transmission SPECT data

    SciTech Connect

    Riddell, C.; Savi, A.; Gilardi, M.C.; Fazio, F.

    1996-08-01

    A least squares technique for reconstructing truncated data is presented. The method involves including the ramp filter in the system matrix. The sinogram is extrapolated with zeros for filtering, but extrapolated values are not considered during backward and forward projections. The new matrix leads to a frequency weighted least squares criterion. Tikhonov regularization and a priori information are considered. For the 0-order regularization, the resulting images are biased, but it is shown that this bias is recovered by a simple scaling process. First and second order regularizations provide true restoration. The technique is applied to simulated and real transmission SPECT data. Results show that the frequency weighted least squares criterion is a valuable approach for efficiently reconstructing truncated sinograms.

  12. Analysis of total least squares in estimating the parameters of a mortar trajectory

    SciTech Connect

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results, about 10%, over the LS method.

  13. Lane detection and tracking based on improved Hough transform and least-squares method

    NASA Astrophysics Data System (ADS)

    Sun, Peng; Chen, Hui

    2014-11-01

    Lane detection and tracking play important roles in lane departure warning systems (LDWS). In order to improve real-time performance and obtain better lane detection results, an improved algorithm combining an improved Hough transform with least-squares fitting is proposed in this paper. In the image pre-processing stage, a multi-gradient Sobel operator is first used to obtain the edge map of road images, then an adaptive Otsu algorithm is used to obtain the binary image, and, to meet the precision requirement of single-pixel width, a fast parallel thinning algorithm is used to get the skeleton map of the binary image. Lane lines are then initially detected using a polar angle constraint Hough transform, which narrows the search scope. Finally, during the tracking phase, a dynamic region of interest (ROI) is set up based on the detection result of the previous image frame, and within the predicted dynamic ROI the least-squares fitting method is used to fit the lane line, which greatly reduces the computation. A failure judgment module is also added to improve detection reliability: when the least-squares fit fails, the polar angle constraint Hough transform is restarted for initial detection, coordinating the Hough transform and the least-squares fitting method. The algorithm takes into account the robustness of the Hough transform and the real-time performance of least-squares fitting, and sets up a dynamic ROI for lane detection. Experimental results show good lane recognition performance, with the average time to complete preprocessing and lane recognition for one road image under 25 ms, demonstrating good real-time performance and strong robustness.
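
    A minimal sketch of the tracking-phase fit: a least-squares line is fitted to candidate lane pixels inside the predicted ROI, with a simple residual check standing in for the failure judgment module (the Hough initialization is not reproduced; data are synthetic).

```python
import numpy as np

def fit_lane(points, max_rms=2.0):
    """Least-squares line fit x = a*y + b to lane-edge pixels (row, col).

    Fitting x as a function of y handles near-vertical lane lines.
    Returns (a, b), or None when the fit is too poor (fall back to Hough).
    """
    ys, xs = points[:, 0].astype(float), points[:, 1].astype(float)
    a, b = np.polyfit(ys, xs, 1)
    rms = np.sqrt(np.mean((xs - (a * ys + b)) ** 2))
    return (a, b) if rms <= max_rms else None

# Hypothetical skeleton pixels inside the predicted region of interest
rng = np.random.default_rng(7)
ys = np.arange(200, 400)
xs = 0.5 * ys + 80 + rng.normal(0, 1.0, ys.size)
pts = np.column_stack([ys, xs])
print("lane model:", fit_lane(pts))
```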

  14. Software For Least-Squares And Robust Estimation

    NASA Technical Reports Server (NTRS)

    Jeffreys, William H.; Fitzpatrick, Michael J.; Mcarthur, Barbara E.; Mccartney, James

    1990-01-01

    GAUSSFIT computer program includes full-featured programming language facilitating creation of mathematical models solving least-squares and robust-estimation problems. Programming language designed to make it easy to specify complex reduction models. Written in 100 percent C language.

  15. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

    2011-01-01

    This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
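
    A minimal sketch of the approximation step only: a least-squares fit in a Chebyshev basis to a noisy nonlinearity, via numpy's chebyshev module. The adaptive-control loop around it is not reproduced.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares approximation of an uncertain nonlinearity in a Chebyshev
# basis (illustrative; the paper applies this inside a flight-control loop).
x = np.linspace(-1, 1, 200)
f = np.tanh(3 * x) + 0.05 * np.random.default_rng(8).normal(size=x.size)

coef = C.chebfit(x, f, deg=8)          # least-squares Chebyshev coefficients
approx = C.chebval(x, coef)
print("max abs error:", round(np.max(np.abs(approx - np.tanh(3 * x))), 3))
```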

  16. Least squares approximation of two-dimensional FIR digital filters

    NASA Astrophysics Data System (ADS)

    Alliney, S.; Sgallari, F.

    1980-02-01

    In this paper, a new method for the synthesis of two-dimensional FIR digital filters is presented. The method is based on a least-squares approximation of the ideal frequency response; an orthogonality property of certain functions, related to the frequency sampling design, improves the computational efficiency.

  17. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L²-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L²-norm method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H¹-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  18. Least-squares finite element methods for quantum chromodynamics

    SciTech Connect

    Ketelsen, Christian; Brannick, J; Manteuffel, T; Mccormick, S

    2008-01-01

    A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2d Schwinger model of quantum electrodynamics.

  19. Anisotropy minimization via least squares method for transformation optics.

    PubMed

    Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

    2014-07-28

    In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.

  20. Least-squares finite elements for Stokes problem

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Chang, C. L.

    1988-01-01

    A least-squares method based on the first-order velocity-pressure-vorticity formulation for the Stokes problem is proposed. This method leads to a minimization problem rather than to a saddle-point problem. The choice of the combinations of elements is thus not subject to the Ladyzhenskaya-Babuska-Brezzi (LBB) condition. Numerical results are given for the optimal rate of convergence for equal-order interpolations.

  1. A new least-squares transport equation compatible with voids

    SciTech Connect

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties.

  2. Compressible flow calculations employing the Galerkin/least-squares method

    NASA Technical Reports Server (NTRS)

    Shakib, F.; Hughes, T. J. R.; Johan, Zdenek

    1989-01-01

    A multielement group, domain decomposition algorithm is presented for solving linear nonsymmetric systems arising in the finite-element analysis of compressible flows employing the Galerkin/least-squares method. The iterative strategy employed is based on the generalized minimum residual (GMRES) procedure originally proposed by Saad and Shultz. Two levels of preconditioning are investigated. Applications to problems of high-speed compressible flow illustrate the effectiveness of the scheme.

  3. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  4. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
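
    A rough sketch of steps (ii)-(iv), under the assumption that an equation's least-squares residual can serve as its inconsistency index; the paper's exact index and pruning rules are not reproduced.

```python
import numpy as np

def minnorm_lsq_dropping_contradictions(A, b, cut=2.0):
    """Score each equation's contradiction by its least-squares residual,
    drop equations far above the typical level, then return the
    minimum-norm least-squares solution of the reduced system."""
    x = np.linalg.pinv(A) @ b                    # initial LS fit, all equations
    res = np.abs(A @ x - b)                      # per-equation inconsistency index
    ok = res <= cut * np.median(res) + 1e-12     # flag strongly contradictory rows
    return np.linalg.pinv(A[ok]) @ b[ok], np.where(~ok)[0]

A = np.array([[1.0, 0], [0, 1], [1, 1], [1, -1], [1, 0]])
b = np.array([1.0, 2.0, 3.1, -0.9, 50.0])        # last equation is contradictory
x, dropped = minnorm_lsq_dropping_contradictions(A, b)
print("solution:", np.round(x, 3), "dropped equations:", dropped)
```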

  5. Multilevel first-order system least squares for PDEs

    SciTech Connect

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  6. Orthogonal basis functions in discrete least-squares rational approximation

    NASA Astrophysics Data System (ADS)

    Bultheel, A.; van Barel, M.; van Gucht, P.

    2004-03-01

    We consider a problem that arises in the field of frequency domain system identification. If a discrete-time system has an input-output relation Y(z) = G(z)U(z), with transfer function G, then the problem is to find a rational approximation for G. The data given are measurements of input and output spectra in the frequency points z_k: {U(z_k), Y(z_k)}, k = 1, …, N, together with some weight. The approximation criterion is to minimize the weighted discrete least squares norm of the vector obtained by evaluating the error in the measurement points. If the poles of the system are fixed, then the problem reduces to a linear least-squares problem in two possible ways: by multiplying out the denominators and hiding these in the weight, which leads to the construction of orthogonal vector polynomials, or the problem can be solved directly using an orthogonal basis of rational functions. The orthogonality of the basis is important because if the transfer function is represented with respect to a nonorthogonal basis, then this least-squares problem can be very ill conditioned. Even if an orthogonal basis is used, but with respect to the wrong inner product (e.g., the Lebesgue measure on the unit circle), numerical instability can be fatal in practice. We show that both approaches lead to an inverse eigenvalue problem, which forms the common framework in which fast and numerically stable algorithms can be designed for the computation of the orthonormal basis.

  7. Recursive least-squares learning algorithms for neural networks

    SciTech Connect

    Lewis, P.S.; Hwang, Jenq-Neng

    1990-01-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is of order O(N²), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
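
    The following sketch shows the basic RLS recursion on a linear-in-parameters model; the paper applies the same recursion to errors linearized about the current network weights, which is not reproduced here.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step for a linear-in-parameters model
    y ~ phi @ theta, with forgetting factor lam (1.0 = none)."""
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + k * (y - phi @ theta)        # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam         # update inverse "Hessian"
    return theta, P

rng = np.random.default_rng(9)
true_theta = np.array([2.0, -1.0, 0.5])
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(200):
    phi = rng.normal(size=3)
    y = phi @ true_theta + rng.normal(0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
print("estimate:", np.round(theta, 3))
```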

  8. Solving linear inequalities in a least squares sense

    SciTech Connect

    Bramley, R.; Winnicka, B.

    1994-12-31

    Let A ∈ R^(m×n) be an arbitrary real matrix, and let b ∈ R^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax - b||, where || · || refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax - b) = 0, and the optimal residual r* = b - Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax - b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
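
    Since ||(Ax - b)_+||^2 is a smooth function of x with gradient A^T (Ax - b)_+, small instances of this problem can be handled by a generic quasi-Newton solver, as in the sketch below (the function and test data are illustrative, not the authors' algorithm).

```python
import numpy as np
from scipy.optimize import minimize

def lsq_inequalities(A, b, x0=None):
    """Minimize 0.5 * ||(A x - b)_+||^2: only violated rows of the
    inequality system A x <= b contribute to the objective."""
    A, b = np.asarray(A, float), np.asarray(b, float)

    def fg(x):
        v = np.maximum(A @ x - b, 0.0)       # residuals of violated rows
        return 0.5 * v @ v, A.T @ v          # objective value and gradient

    x0 = np.zeros(A.shape[1]) if x0 is None else x0
    return minimize(fg, x0, jac=True, method="L-BFGS-B").x

# Scalar example with conflicting bounds: x <= 2, x >= 0, x <= -1.
A = np.array([[1.0], [-1.0], [1.0]])
b = np.array([2.0, 0.0, -1.0])
print(lsq_inequalities(A, b))   # ~[-0.5], a compromise between the conflicts
```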

  9. A note on the total least squares problem for coplanar points

    SciTech Connect

    Lee, S.L.

    1994-09-01

    The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, ..., n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error to coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
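
    The same fit can also be obtained from the eigendecomposition of the 2 × 2 scatter matrix, which is numerically equivalent to Pearson's closed-form expressions in terms of the mean, standard deviation, and correlation coefficient; the sketch below takes that route (the note's complex-number derivation is not reproduced, and the data are synthetic).

```python
import numpy as np

def tls_line(x, y):
    """TLS line through 2-D points: the line passes through the centroid
    along the principal eigenvector of the scatter matrix, and the TLS
    error (sum of squared perpendicular distances) is the smallest
    eigenvalue."""
    pts = np.column_stack([x, y])
    centered = pts - pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(centered.T @ centered)   # ascending order
    return pts.mean(axis=0), evecs[:, 1], evals[0]         # centroid, direction, error

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 0.8 * x + 1.0 + 0.1 * rng.standard_normal(40)
centroid, d, err = tls_line(x, y)
print(round(d[1] / d[0], 3), round(err, 3))   # slope ~0.8, small TLS error
```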

  10. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    SciTech Connect

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentration was <1%.
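
    The unweighted core of such a scheme, classical least-squares calibration with nonzero intercepts, fits in a few lines; the weighting, pathlength handling, and FT-IR specifics of the paper are omitted, and all data below are synthetic.

```python
import numpy as np

# Calibration: estimate pure-component spectra plus a baseline from
# spectra of known mixtures, A ~ [C, 1] @ K.
rng = np.random.default_rng(3)
wav = np.linspace(0, np.pi, 100)
pure = np.abs(np.sin(np.outer([1, 2], wav)))            # two component spectra
C = rng.uniform(0.1, 1.0, size=(12, 2))                 # known concentrations
A = C @ pure + 0.05 + 0.01 * rng.standard_normal((12, 100))  # baseline + noise

C_aug = np.column_stack([C, np.ones(12)])               # nonzero-intercept column
K, *_ = np.linalg.lstsq(C_aug, A, rcond=None)           # rows: spectra, baseline

# Prediction: concentrations of an unknown sample from its spectrum.
a_unknown = np.array([0.3, 0.6]) @ pure + 0.05
c_hat, *_ = np.linalg.lstsq(K.T, a_unknown, rcond=None)
print(np.round(c_hat[:2], 3))                           # ~[0.3, 0.6]
```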

  11. Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks.

    PubMed

    Chen, S; Wu, Y; Luk, B L

    1999-01-01

    The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.

  12. Symbolic Givens reduction in large sparse least squares problems

    SciTech Connect

    Ostrouchov, G.

    1984-12-01

    Orthogonal Givens factorization is a popular method for solving large sparse least squares problems. In order to exploit sparsity and to use a fixed data structure in Givens reduction, a preliminary symbolic factorization needs to be performed. Some results on row-ordering and structure of rows in a partially reduced matrix are obtained using a graph-theoretic representation. These results provide a basis for a symbolic Givens factorization. Column-ordering is also discussed, and an algorithm for symbolic Givens reduction is developed and tested.
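
    For concreteness, a dense (non-symbolic) sketch of Givens reduction follows; a sparse implementation would use the symbolic factorization described in the report to predict, in advance, which entries each rotation touches. All names are illustrative.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_via_givens(A):
    """Reduce A to upper-triangular R by zeroing subdiagonal entries one
    rotation at a time; each rotation only touches two rows, which is
    what makes the method attractive for sparse least squares."""
    R = A.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(R[i - 1, j], R[i, j])
            R[[i - 1, i], j:] = np.array([[c, s], [-s, c]]) @ R[[i - 1, i], j:]
    return R

A = np.array([[3.0, 1.0], [4.0, 2.0], [0.0, 5.0]])
print(np.round(qr_via_givens(A), 3))   # upper triangular (up to signs)
```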

  13. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least-squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution, which reduces specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and the slave manipulator was well behaved under teleoperational control.
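
    A minimal sketch of the damped least-squares step itself is given below; the paper's contribution, the dynamic choice of the weighting matrices, is not reproduced, and the two-link arm and all parameter values are illustrative assumptions.

```python
import numpy as np

def dls_ik_step(J, err, lam=0.1, W=None):
    """Damped least-squares IK step: dq = J^T (J J^T + lam^2 W)^{-1} err.
    Raising lam (or entries of W) near singularities or joint limits
    trades exactness for a bounded, well-conditioned joint step."""
    W = np.eye(J.shape[0]) if W is None else W
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * W, err)

# Planar 2-link arm (unit links) in a nearly stretched, near-singular pose.
q = np.array([0.05, 0.02])
s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
J = np.array([[-s1 - s12, -s12],
              [ c1 + c12,  c12]])
err = np.array([0.0, 0.1])                  # desired Cartesian correction
print(dls_ik_step(J, err, lam=1e-6))        # near-exact but large step
print(dls_ik_step(J, err, lam=0.3))         # damped, smaller and safer step
```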

  14. The extended-least-squares treatment of correlated data

    SciTech Connect

    Cohen, E.R.; Tuninsky, V.S.

    1994-12-31

    A generalization of the extended-least-squares algorithms for the case of correlated discrepant data is given. The expressions of the linear, unbiased, minimum-variance estimators (LUMVE) derived before are reformulated. A posteriori estimates of the variance, taking into account the inconsistency of all of the experimental data, have the same form as for the case of non-correlated data. These estimates extend the previous improvement on the "traditional" Birge-ratio procedures to the case of correlated input data.

  15. Recursive least squares estimation and Kalman filtering by systolic arrays

    NASA Technical Reports Server (NTRS)

    Chen, M. J.; Yao, K.

    1988-01-01

    One of the most promising new directions for high-throughput-rate problems is that based on systolic arrays. In this paper, using the matrix-decomposition approach, a systolic Kalman filter is formulated as a modified square-root information filter consisting of a whitening filter followed by a simple least-squares operation based on the systolic QR algorithm. By proper skewing of the input data, a fully pipelined time- and measurement-update systolic Kalman filter can be achieved with O(n^2) processing cells, resulting in a system throughput rate of O(n).

  16. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

    PubMed Central

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China’s regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271

  17. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China's regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.

  18. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporated with distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.

  19. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi Hu, Rui Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  20. Random errors in interferometry with the least-squares method

    SciTech Connect

    Wang, Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

  1. Flow Applications of the Least Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  2. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Yu, Yaojun

    A novel quality prediction method with a moving time window is proposed for small-batch production processes, based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which the small-batch learning problem is handled well: nearer samples in the history data are assigned larger weights, while farther samples are assigned smaller weights. A typical machining process, cutting a bearing outer race, is carried out, and the real measured data are used in a comparative experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch production processes.
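
    A compact sketch of weighted LS-SVR in its dual form is given below: larger sample weights force tighter fits on the corresponding samples, which is how newer history data can be emphasized. The RBF kernel, the weighting profile, and the toy data are assumptions for illustration, not the paper's machining dataset.

```python
import numpy as np

def weighted_lssvr_fit(X, y, w, gamma=10.0, sigma=1.0):
    """Weighted LS-SVR dual system:
        [ 0  1^T         ] [b]       [0]
        [ 1  K + V_gamma ] [alpha] = [y],  V_gamma = diag(1 / (gamma w_i)),
    so a sample with a larger weight w_i is regularized less and fitted
    more tightly."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))                 # RBF kernel matrix
    M = np.zeros((n + 1, n + 1))
    M[0, 1:], M[1:, 0] = 1.0, 1.0
    M[1:, 1:] = K + np.diag(1.0 / (gamma * w))
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:], X, sigma                 # b, alpha, training data

def lssvr_predict(model, Xnew):
    b, alpha, Xtr, sigma = model
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

rng = np.random.default_rng(4)
X = np.linspace(0, 6, 40)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
w = np.linspace(0.2, 1.0, 40)        # later (newer) samples weighted more
model = weighted_lssvr_fit(X, y, w)
print(np.round(lssvr_predict(model, np.array([[1.5], [4.5]])), 2))  # ~sin values
```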

  3. Moving least-squares reconstruction of large models with GPUs.

    PubMed

    Merry, Bruce; Gain, James; Marais, Patrick

    2014-02-01

    Modern laser range scanning campaigns produce extremely large point clouds, and reconstructing a triangulated surface thus requires both out-of-core techniques and significant computational power. We present a GPU-accelerated implementation of the moving least-squares (MLS) surface reconstruction technique. We believe this to be the first GPU-accelerated, out-of-core implementation of surface reconstruction that is suitable for laser range-scanned data. While several previous out-of-core approaches use a sweep-plane approach, we subdivide the space into cubic regions that are processed independently. This independence allows the algorithm to be parallelized using multiple GPUs, either in a single machine or a cluster. It also allows data sets with billions of point samples to be processed on a standard desktop PC. We show that our implementation is an order of magnitude faster than a CPU-based implementation when using a single GPU, and scales well to 8 GPUs.

  4. Moving Least-Squares Reconstruction of Large Models with GPUs.

    PubMed

    Merry, Bruce; Gain, James; Marais, Patrick

    2013-09-01

    Modern laser range scanning campaigns produce extremely large point clouds, and reconstructing a triangulated surface thus requires both out-of-core techniques and significant computational power. We present a GPU-accelerated implementation of the Moving Least Squares (MLS) surface reconstruction technique. While several previous out-of-core approaches use a sweep-plane approach, we subdivide the space into cubic regions that are processed independently. This independence allows the algorithm to be parallelized using multiple GPUs, either in a single machine or a cluster. It also allows data sets with billions of point samples to be processed on a standard desktop PC. We show that our implementation is an order of magnitude faster than a CPU-based implementation when using a single GPU, and scales well to 8 GPUs.

  5. A Galerkin least squares approach to viscoelastic flow.

    SciTech Connect

    Rao, Rekha R.; Schunk, Peter Randall

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

  6. Specular surface measurement by using least squares light tracking technique

    NASA Astrophysics Data System (ADS)

    Guo, Hongwei; Feng, Peng; Tao, Tao

    2010-02-01

    This paper presents a novel technique for measuring the three-dimensional (3-D) shapes of specular surfaces. In this technique, an LCD monitor plane is used as the diffusive light source. It can be vertically moved to two or more different positions. At each position, sinusoid fringe patterns are displayed on the LCD plane and reflected by the measured specular surface. The distorted fringe patterns are captured with a camera, so that the phase distributions at these positions are measured. From the phases, the locus of the incident ray for each pixel is determined in the least squares sense, and further the three-dimensional shape of the specular surface is reconstructed. In so doing, the restrictions and limitations of the existing techniques in computational complexities, phase ambiguities, and error accumulations are eliminated. The validity of this technique has been demonstrated by experimental results.

  7. RNA structural motif recognition based on least-squares distance.

    PubMed

    Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin

    2013-09-01

    RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. A test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA was compiled. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.

  8. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.

  9. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
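
    For background, the standard constrained alternating least-squares loop that such methods build on can be sketched as follows, here with nonnegativity constraints on both factors; the patented biased-ALS modification itself is not reproduced, and the data and factor count are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_als(D, n_factors, n_iter=50, seed=0):
    """Factor D ~ C @ S.T under nonnegativity, alternating row-wise
    nonnegative least-squares solves for the two factor matrices."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(size=(D.shape[1], n_factors))
    for _ in range(n_iter):
        C = np.array([nnls(S, row)[0] for row in D])      # fix S, solve for C
        S = np.array([nnls(C, col)[0] for col in D.T])    # fix C, solve for S
    return C, S

# Two overlapping nonnegative "spectra" mixed with nonnegative weights.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 60)
S_true = np.column_stack([np.exp(-((t - 0.3) / 0.10) ** 2),
                          np.exp(-((t - 0.5) / 0.15) ** 2)])
C_true = rng.uniform(size=(20, 2))
D = C_true @ S_true.T + 0.01 * rng.standard_normal((20, 60))
C, S = constrained_als(D, 2)
print(round(np.linalg.norm(D - C @ S.T) / np.linalg.norm(D), 4))  # small misfit
```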

  10. Recursive least square vehicle mass estimation based on acceleration partition

    NASA Astrophysics Data System (ADS)

    Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

    2014-05-01

    Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal, and it removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least squares method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently, based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined not only to reduce the estimation error but also to ensure enough data for the resistance estimation and mass estimation in some critical situations. A modification of the algorithm to improve the result of the mass estimation is also discussed. Experimental data on asphalt road, plastic runway, gravel road, and sloping road surfaces are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.

  11. Useful and little-known applications of the Least Square Method and some consequences of covariances

    NASA Astrophysics Data System (ADS)

    Helene, Otaviano; Mariano, Leandro; Guimarães-Filho, Zwinglio

    2016-10-01

    Covariances are as important as variances when dealing with experimental data and they must be considered in fitting procedures and adjustments in order to preserve the statistical properties of the adjusted quantities. In this paper, we apply the Least Square Method in matrix form to several simple problems in order to evaluate the consequences of covariances in the fitting procedure. Among the examples, we demonstrate how a measurement of a physical quantity can change the adopted value of all other covariant quantities and how a new single point (x, y) improves the parameters of a previously adjusted straight line.
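
    A small sketch of the method in matrix form with a full data covariance matrix is shown below; it makes visible how correlated measurements propagate into covariant fitted parameters. The numbers are illustrative.

```python
import numpy as np

def gls_fit(X, y, V):
    """Generalized least squares: theta = (X^T V^-1 X)^-1 X^T V^-1 y,
    with parameter covariance (X^T V^-1 X)^-1, so correlations in the
    data covariance V carry over into covariances of the estimates."""
    Vi = np.linalg.inv(V)
    cov = np.linalg.inv(X.T @ Vi @ X)
    return cov @ X.T @ Vi @ y, cov

# Straight-line fit where all measurements share a common systematic error.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.1, 2.9, 5.2, 7.1])
V = 0.04 * np.eye(4) + 0.02        # independent part + fully correlated part
theta, cov = gls_fit(X, y, V)
print(np.round(theta, 3))          # intercept and slope
print(np.round(cov, 4))            # note the nonzero off-diagonal covariance
```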

  12. Weighted least-squares algorithm for phase unwrapping based on confidence level in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Shaohua; Yu, Jie; Yang, Cankun; Jiao, Shuai; Fan, Jun; Wan, Yanyan

    2015-12-01

    Phase unwrapping is a key step in InSAR (Synthetic Aperture Radar Interferometry) processing, and its result may directly affect the accuracy of DEM (Digital Elevation Model) and ground deformation products. However, decoherence phenomena such as shadow and layover, in areas of severe land subsidence where the terrain is steep and the slope changes greatly, cause error propagation in the differential wrapped phase and hence an inaccurate unwrapped phase. In order to suppress the noise and reduce the effect of undersampling caused by topographic factors, a weighted least-squares method based on a confidence level defined in the frequency domain is used in this study. The method expresses the terrain slope in the interferogram as the partial phase frequency in the range and azimuth directions and integrates these into a confidence level. This parameter is used as a constraint in the nonlinear least-squares phase unwrapping algorithm to smooth the unwrapped phase gradient and improve the accuracy of phase unwrapping. Finally, a comparison with interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm has higher accuracy and stability than ordinary weighted least-squares phase unwrapping algorithms while taking terrain factors into account.

  13. A least-squares computational "tool kit". Nuclear data and measurements series

    SciTech Connect

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.

  14. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a nonuniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted partial least squares discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. These preliminary results show that the performance of the proposed LW-PLS-DA approach is comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
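
    A simplified sketch of the local step is given below: fit a PLS-DA model only on the query's k nearest training samples (with uniform weights; the paper's distance-based down-weighting of the farther neighbors is omitted). The toy data, parameter values, and purity shortcut are our assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lw_plsda_predict(Xtr, ytr, xq, k=30, n_comp=2):
    """Classify one query by fitting PLS-DA on its k nearest neighbors."""
    idx = np.argsort(np.linalg.norm(Xtr - xq, axis=1))[:k]   # local set
    local_y = ytr[idx]
    if len(np.unique(local_y)) == 1:
        return int(local_y[0])             # pure neighborhood: no model needed
    Y = np.eye(ytr.max() + 1)[local_y]     # one-hot class dummy matrix
    pls = PLSRegression(n_components=n_comp)
    pls.fit(Xtr[idx], Y)
    return int(np.argmax(pls.predict(xq[None, :])))

# Two partially overlapping Gaussian classes.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200, int), np.ones(200, int)])
print(lw_plsda_predict(X, y, np.array([0.0, 0.0])))   # expected: 0
print(lw_plsda_predict(X, y, np.array([2.0, 2.0])))   # expected: 1
```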

  15. Least-squares joint imaging of multiples and primaries

    NASA Astrophysics Data System (ADS)

    Brown, Morgan Parker

    Current exploration geophysics practice still regards multiple reflections as noise, although multiples often contain considerable information about the earth's angle-dependent reflectivity that primary reflections do not. To exploit this information, multiples and primaries must be combined in a domain in which they are comparable, such as in the prestack image domain. However, unless the multiples and primaries have been pre-separated from the data, crosstalk leakage between multiple and primary images will significantly degrade any gains in the signal fidelity, geologic interpretability, and signal-to-noise ratio of the combined image. I present a global linear least-squares algorithm, denoted LSJIMP (Least-squares Joint Imaging of Multiples and Primaries), which separates multiples from primaries while simultaneously combining their information. The novelty of the method lies in the three model regularization operators which discriminate between crosstalk and signal and extend information between multiple and primary images. The LSJIMP method exploits the hitherto ignored redundancy between primaries and multiples in the data. While many different types of multiple imaging operators are well-suited for use with the LSJIMP method, in this thesis I utilize an efficient prestack time imaging strategy for multiples which sacrifices accuracy in a complex earth for computational speed and convenience. I derive a variant of the normal moveout (NMO) equation for multiples, called HEMNO, which can image "split" pegleg multiples which arise from a moderately heterogeneous earth. I also derive a series of prestack amplitude compensation operators which, when combined with HEMNO, transform pegleg multiples into events that are directly comparable, kinematically and in terms of amplitudes, to the primary reflection. I test my implementation of LSJIMP on two datasets from the deepwater Gulf of Mexico. The first, a 2-D line in the Mississippi Canyon region, exhibits a variety of

  16. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  17. Optimization of Parameter Selection for Partial Least Squares Model Development

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize nonsystematic parameters in a quantitative model, i.e., spectral pretreatment, latent factors and variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the partial least squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of standard error of prediction to standard deviation (RPD), and the determination coefficients of calibration (Rcal2) and validation (Rpre2) were simultaneously assessed to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there was more than one modeling path that ensures good modeling. The PLS model optimizes modeling parameters step-by-step, but the robust model described here demonstrates better efficiency than the approaches in other published work.

  18. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
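
    A minimal numerical illustration of the basic least-squares Langley fit (just the textbook regression, not any of the eleven techniques compared in the paper) is shown below; the simulated voltages and optical depth are arbitrary.

```python
import numpy as np

# Simulated morning of direct-sun voltages at constant optical depth tau.
rng = np.random.default_rng(7)
V0_true, tau = 2.5, 0.12
m = np.linspace(2.0, 6.0, 25)                       # air mass values
V = V0_true * np.exp(-tau * m) * (1 + 0.005 * rng.standard_normal(25))

# Least squares on ln(V) vs m: slope = -tau, intercept = ln(V0).
slope, intercept = np.polyfit(m, np.log(V), 1)
V0 = np.exp(intercept)
print(round(V0, 3), round(-slope, 4))               # ~2.5 and ~0.12

# Once V0 is calibrated, any later (V, m) pair yields tau directly.
print(round((-np.log(V / V0) / m).mean(), 4))
```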

  19. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios. PMID:26366654

  20. Robustness of ordinary least squares in randomized clinical trials.

    PubMed

    Judkins, David R; Porter, Kristin E

    2016-05-20

    There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors works well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors.

  1. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

  2. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  3. Curve-skeleton extraction using iterative least squares optimization.

    PubMed

    Wang, Yu-Shuen; Lee, Tong-Yee

    2008-01-01

    A curve skeleton is a compact representation of 3D objects and has numerous applications. It can be used to describe an object's geometry and topology. In this paper, we introduce a novel approach for computing curve skeletons for volumetric representations of the input models. Our algorithm consists of three major steps: 1) using iterative least squares optimization to shrink models and, at the same time, preserving their geometries and topologies, 2) extracting curve skeletons through the thinning algorithm, and 3) pruning unnecessary branches based on shrinking ratios. The proposed method is less sensitive to noise on the surface of models and can generate smoother skeletons. In addition, our shrinking algorithm requires little computation, since the optimization system can be factorized and stored in the pre-computational step. We demonstrate several extracted skeletons that help evaluate our algorithm. We also experimentally compare the proposed method with other well-known methods. Experimental results show advantages when using our method over other techniques. PMID:18467765

  4. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.

  5. Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors

    SciTech Connect

    Gavel, D

    2002-10-08

    A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.
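
    As a generic illustration of the mode-space view (not the paper's preconditioning technique), building a least-squares reconstructor from an SVD makes the singular mode space explicit, and poorly sensed modes can be projected out by zeroing their inverse singular values; the geometry matrix below is random toy data.

```python
import numpy as np

def filtered_reconstructor(G, n_drop=0):
    """Least-squares reconstructor for slopes = G @ phase, built from the
    SVD so that the n_drop most weakly sensed singular modes (where
    waffle-like patterns live) can be projected out."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = 1.0 / s
    if n_drop > 0:
        s_inv[-n_drop:] = 0.0               # remove the weakest modes
    return Vt.T @ (s_inv[:, None] * U.T)

rng = np.random.default_rng(8)
G = rng.standard_normal((40, 25))           # toy slope-to-phase geometry
R = filtered_reconstructor(G, n_drop=3)
phase = rng.standard_normal(25)
residual = R @ (G @ phase) - phase          # error confined to dropped modes
print(round(np.linalg.norm(residual), 3))
```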

  6. River flow time series using least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2011-06-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables which work as the time series forecasting for the LSSVM model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.

  7. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  8. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.

  9. Fast Dating Using Least-Squares Criteria and Algorithms.

    PubMed

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older that its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

  10. Fast Dating Using Least-Squares Criteria and Algorithms

    PubMed Central

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older that its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley–Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to

  12. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    SciTech Connect

    Dana Kelly; Corwin Atwood

    2011-03-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  13. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    SciTech Connect

    Dana Kelly

    2011-03-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
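
    A minimal sketch of the constrained least-squares idea, assuming hypothetical target marginal means and one target variance for the alpha-factors; this is an illustrative objective, not the paper's exact one:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical constraints: target means of the multinomial cell
        # probabilities plus a target variance for the first component.
        target_mean = np.array([0.95, 0.03, 0.02])
        target_var1 = 0.002

        def dirichlet_moments(alpha):
            a0 = alpha.sum()
            mean = alpha / a0
            var = mean * (1.0 - mean) / (a0 + 1.0)
            return mean, var

        def objective(log_alpha):
            # least-squares mismatch of the moments; log parameters keep alpha > 0
            mean, var = dirichlet_moments(np.exp(log_alpha))
            return np.sum((mean - target_mean) ** 2) + (var[0] - target_var1) ** 2

        res = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
        print("fitted Dirichlet parameters:", np.exp(res.x))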

  14. Robust spectrotemporal decomposition by iteratively reweighted least squares.

    PubMed

    Ba, Demba; Babadi, Behtash; Purdon, Patrick L; Brown, Emery N

    2014-12-16

    Classical nonparametric spectral analysis uses sliding windows to capture the dynamic nature of most real-world time series. This universally accepted approach fails to exploit the temporal continuity in the data and is not well-suited for signals with highly structured time-frequency representations. For a time series whose time-varying mean is the superposition of a small number of oscillatory components, we formulate nonparametric batch spectral analysis as a Bayesian estimation problem. We introduce prior distributions on the time-frequency plane that yield maximum a posteriori (MAP) spectral estimates that are continuous in time yet sparse in frequency. Our spectral decomposition procedure, termed spectrotemporal pursuit, can be efficiently computed using an iteratively reweighted least-squares algorithm and scales well with typical data lengths. We show that spectrotemporal pursuit works by applying to the time series a set of data-derived filters. Using a link between Gaussian mixture models, l1 minimization, and the expectation-maximization algorithm, we prove that spectrotemporal pursuit converges to the global MAP estimate. We illustrate our technique on simulated and real human EEG data as well as on human neural spiking activity recorded during loss of consciousness induced by the anesthetic propofol. For the EEG data, our technique yields significantly denoised spectral estimates that have significantly higher time and frequency resolution than multitaper spectral estimates. For the neural spiking data, we obtain a new spectral representation of neuronal firing rates. Spectrotemporal pursuit offers a robust spectral decomposition framework that is a principled alternative to existing methods for decomposing time series into a small number of smooth oscillatory components.
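
    The reweighting loop itself is compact. Below is a generic iteratively reweighted least-squares sketch for an l1-penalized least-squares problem, a stand-in for the spectrotemporal-pursuit objective, on synthetic data:

        import numpy as np

        # minimize ||y - A x||^2 + lam * ||x||_1, handling the l1 term with
        # iteratively reweighted l2 penalties.
        rng = np.random.default_rng(0)
        n, p = 80, 40
        A = rng.standard_normal((n, p))
        x_true = np.zeros(p); x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]
        y = A @ x_true + 0.05 * rng.standard_normal(n)

        lam, eps = 0.5, 1e-6
        x = np.zeros(p)
        for _ in range(50):
            w = 1.0 / (np.abs(x) + eps)  # weights from the current iterate
            # solve the reweighted normal equations (A^T A + lam diag(w)) x = A^T y
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])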

  15. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output of a numerically controlled oscillator; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
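
    A discrete, much simplified analogue of steps (4) and (5): at a trial frequency, amplitude and phase follow from a linear least-squares fit, and the frequency is refined by minimizing the residual. Synthetic data; this is not the patented apparatus:

        import numpy as np

        fs, n = 1000.0, 256
        t = np.arange(n) / fs
        rng = np.random.default_rng(1)
        x = 1.7 * np.cos(2 * np.pi * 123.4 * t + 0.6) + 0.1 * rng.standard_normal(n)

        def ls_fit(freq):
            # model x ~ a*cos + b*sin at a fixed trial frequency (linear LS)
            A = np.column_stack([np.cos(2 * np.pi * freq * t),
                                 np.sin(2 * np.pi * freq * t)])
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            return coef, np.sum((x - A @ coef) ** 2)

        freqs = np.linspace(100.0, 150.0, 2001)
        best = min(freqs, key=lambda f: ls_fit(f)[1])   # grid refinement of frequency
        (a, b), _ = ls_fit(best)
        amp, phase = np.hypot(a, b), np.arctan2(-b, a)
        print(f"freq ~ {best:.2f} Hz, amp ~ {amp:.2f}, phase ~ {phase:.2f} rad")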

  16. The moving-least-squares-particle hydrodynamics method (MLSPH)

    SciTech Connect

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
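
    A one-dimensional moving-least-squares interpolant shows the core mechanism: a locally weighted polynomial fit evaluated at each point. This sketch uses a compactly supported weight and synthetic data, not the MLSPH formulation itself:

        import numpy as np

        def mls(x_eval, x_data, f_data, h=0.3):
            # moving least squares: kernel-weighted local linear fit per point
            out = np.empty_like(x_eval)
            for k, xe in enumerate(x_eval):
                r = np.abs(x_data - xe) / h
                w = np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)  # Wendland-type weight
                P = np.column_stack([np.ones_like(x_data), x_data - xe])      # shifted basis {1, x}
                sw = np.sqrt(w)                                               # sqrt-weights for LS
                coef, *_ = np.linalg.lstsq(sw[:, None] * P, sw * f_data, rcond=None)
                out[k] = coef[0]  # local fit evaluated at xe
            return out

        x = np.linspace(0.0, 1.0, 25)
        f = np.sin(2.0 * np.pi * x)
        print(mls(np.linspace(0.05, 0.95, 7), x, f))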

  17. A least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer.

    PubMed

    Ribeiro, F B; Carlos, D U; Hiodo, F Y; Strobino, E F

    2005-01-01

    In this study, a least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer using the general inverse matrix method is presented. The procedure weights the model equations fitting to the calibration data, taking into account the variances in the counting rates and in the radioactive standard concentrations. The application of the described procedure is illustrated by calibrating twice the same gamma-ray spectrometer, with two independent data sets collected approximately 18 months apart in the same calibration facility.
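
    A hedged sketch of variance-weighted least squares solved through the generalized (pseudo) inverse, with hypothetical counting rates and Poisson-like variances standing in for real calibration data:

        import numpy as np

        conc = np.array([1.0, 2.0, 4.0, 8.0])      # standard concentrations
        rate = np.array([10.2, 19.8, 41.1, 79.5])  # observed counting rates
        var = rate.copy()                          # Poisson-like variances

        A = np.column_stack([np.ones_like(conc), conc])  # [background, sensitivity]
        W = np.diag(1.0 / var)                           # weights = 1/variance
        # weighted normal equations via the pseudoinverse: beta = (A^T W A)^+ A^T W y
        beta = np.linalg.pinv(A.T @ W @ A) @ (A.T @ W @ rate)
        cov = np.linalg.pinv(A.T @ W @ A)                # parameter covariance
        print("background, sensitivity:", beta)
        print("standard errors:", np.sqrt(np.diag(cov)))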

  18. Analyzing industrial energy use through ordinary least squares regression models

    NASA Astrophysics Data System (ADS)

    Golden, Allyson Katherine

    Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
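
    A minimal change-point sketch under these assumptions: hypothetical monthly data, a three-parameter model E = base + slope * max(T - Tcp, 0), and a grid search over the change point with an ordinary least-squares solve at each candidate:

        import numpy as np

        T = np.array([40, 45, 50, 55, 60, 65, 70, 75, 80, 85], dtype=float)  # dry-bulb temp
        E = np.array([500, 505, 498, 510, 520, 560, 610, 655, 700, 745], dtype=float)

        def sse_at(tcp):
            # OLS fit of the two linear parameters at a fixed change point
            A = np.column_stack([np.ones_like(T), np.maximum(T - tcp, 0.0)])
            coef, *_ = np.linalg.lstsq(A, E, rcond=None)
            return np.sum((E - A @ coef) ** 2), coef

        candidates = np.linspace(45.0, 75.0, 121)
        best = min(candidates, key=lambda c: sse_at(c)[0])
        sse, (base, slope) = sse_at(best)
        print(f"change point ~ {best:.1f}, base load ~ {base:.0f}, cooling slope ~ {slope:.2f}")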

  19. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown and then interpolated by each method; the performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.

  20. The Least Squares Stochastic Finite Element Method in Structural Stability Analysis of Steel Skeletal Structures

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Szafran, J.

    2015-05-01

    The main purpose of this work is to verify the influence of the weighting procedure in the Least Squares Method on the probabilistic moments resulting from the stability analysis of steel skeletal structures. We discuss this issue also in the context of the geometrical nonlinearity appearing in the Stochastic Finite Element Method equations for the stability analysis and preservation of the Gaussian probability density function employed to model the Young modulus of a structural steel in this problem. The weighting procedure itself (with both triangular and Dirac-type weighting functions) shows rather marginal influence on all probabilistic coefficients under consideration. This hybrid stochastic computational technique consisting of the FEM and computer algebra systems (ROBOT and MAPLE packages) may be used for analogous nonlinear analyses in structural reliability assessment.

  1. FOSLS (first-order systems least squares): An overview

    SciTech Connect

    Manteuffel, T.A.

    1996-12-31

    The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.

  2. Statistical error in isothermal titration calorimetry: variance function estimation from generalized least squares.

    PubMed

    Tellinghuisen, Joel

    2005-08-01

    The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba²⁺ with 18-crown-6 ether. In the GLS method, the least squares (LS) residuals from the data fit are themselves fitted to a variance function, with iterative adjustment of the weighting function in the data analysis to produce consistency. The data are treated in a pooled fashion, providing 321 fitted residuals from 35 data sets in the final analysis. Heteroscedasticity (nonconstant variance) is clearly indicated. Data error terms proportional to qᵢ and qᵢ/v are well defined statistically, where qᵢ is the heat from the ith injection of titrant and v is the injected volume. The statistical significance of the variance function parameters is confirmed through Monte Carlo calculations that mimic the actual data set. For the data in question, which fall mostly in the range of qᵢ = 100-2000 μcal, the contributions to the data variance from the terms in qᵢ² typically exceed the background constant term for qᵢ > 300 μcal and v < 10 μL. Conversely, this means that in reactions with qᵢ much less than this, heteroscedasticity is not a significant problem. Accordingly, in such cases the standard unweighted fitting procedures provide reliable results for the key parameters, K and ΔH°, and their statistical errors. These results also support an important earlier finding: in most ITC work on 1:1 binding processes, the optimal number of injections is 7-10, which is a factor of 3 smaller than the current norm. For high-q reactions, where weighting is needed for optimal LS analysis, tips are given for using the weighting option in the commercial software commonly employed to process ITC data.
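
    The GLS iteration can be sketched generically: fit, regress the squared residuals on a variance function, reweight, and repeat until consistent. Synthetic heteroscedastic data stand in for the calorimetry measurements:

        import numpy as np

        rng = np.random.default_rng(3)
        q = np.linspace(100.0, 2000.0, 60)       # stand-in for injection heats
        sigma_true = 5.0 + 0.01 * q              # noise grows with q
        y = 2.0 + 0.005 * q + sigma_true * rng.standard_normal(q.size)

        w = np.ones_like(q)
        for _ in range(10):
            A = np.column_stack([np.ones_like(q), q])
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)  # weighted fit
            resid2 = (y - A @ beta) ** 2
            # variance function sigma^2 = c0 + c1*q^2, fitted to squared residuals
            V = np.column_stack([np.ones_like(q), q ** 2])
            c, *_ = np.linalg.lstsq(V, resid2, rcond=None)
            w = 1.0 / np.clip(V @ c, 1e-9, None)  # updated weights
        print("line parameters:", beta, "variance parameters:", c)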

  3. Multivariate curve resolution-alternating least squares and kinetic modeling applied to near-infrared data from curing reactions of epoxy resins: mechanistic approach and estimation of kinetic rate constants.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2006-02-01

    This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
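
    Once concentration profiles are in hand, the rate-constant step is an ordinary nonlinear least-squares fit. In this sketch a synthetic integrated second-order rate law stands in for the recovered MCR-ALS profiles:

        import numpy as np
        from scipy.optimize import curve_fit

        def second_order(t, c0, k):
            # integrated second-order rate law for equal initial concentrations
            return c0 / (1.0 + k * c0 * t)

        t = np.linspace(0.0, 120.0, 25)  # minutes
        rng = np.random.default_rng(7)
        c_obs = second_order(t, 1.0, 0.05) + 0.01 * rng.standard_normal(t.size)

        (c0, k), pcov = curve_fit(second_order, t, c_obs, p0=[1.0, 0.01])
        print(f"k ~ {k:.4f} L mol^-1 min^-1 (+/- {np.sqrt(pcov[1, 1]):.4f})")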

  4. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior
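
    The computational appeal of LS-SVMs is that training reduces to one linear system rather than a quadratic program. A minimal sketch (regression-form LS-SVM on +/-1 labels with an RBF kernel, synthetic data; not the thesis' Bayesian framework):

        import numpy as np

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(-1.0, 0.5, (20, 2)), rng.normal(1.0, 0.5, (20, 2))])
        y = np.r_[-np.ones(20), np.ones(20)]

        def rbf(A, B, s=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * s * s))

        gamma, n = 10.0, len(y)
        K = rbf(X, X)
        # KKT system of the LS-SVM: solve once for bias b and multipliers alpha
        M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(M, np.r_[0.0, y])
        b, alpha = sol[0], sol[1:]

        def predict(Xnew):
            return np.sign(rbf(Xnew, X) @ alpha + b)

        print(predict(np.array([[-1.0, -1.0], [1.0, 1.2]])))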

  5. Least squares arrival time estimators for photons detected using a photomultiplier tube

    SciTech Connect

    Petrick, N.; Hero, A.O. III; Clinthorne, N.H.; Rogers, W.L. . Dept. of Electrical Engineering and Computer Science)

    1992-08-01

    In many applications employing photodetectors, the determination of the arrival time of individual photons at the surface of the detector can be used to localize the photon source. For the case where the photon intensity is extremely low, the most common type of detector used is the photomultiplier tube. The optimal arrival time estimators for single and multiple photons arriving at the surface of a photomultiplier tube are developed in this paper. The optimal timing estimator considered is a weighted non-linear least squares estimate of the detection time for a high gain PMT with Gaussian statistics. The least squares estimator is constructed using the mean and covariance function of the photomultiplier output for different arrival times. The RMS error for the least squares arrival time estimator was calculated and compared with the performance of other common timing estimators, including the first photoelectron timing estimators, using a Burle/RCA 8850 PMT.

  6. Multidimensional least-squares resolution of Raman spectra from intermediates in sensitized photochemical reactions

    SciTech Connect

    Fister, J.C. III; Harris, J.M.

    1995-12-01

    Transient resonance Raman spectroscopy is used to elicit reaction kinetics and intermediate spectra from sensitized photochemical reactions. Nonlinear least-squares analysis of Raman spectra of a triplet-state photosensitizer (benzophenone), acquired as a function of laser intensity and/or quencher concentration, allows the Raman spectra of the sensitizer excited state and intermediate photoproducts to be resolved from the spectra of the ground state and solvent. In cases where physical models describing the system kinetics cannot be found, factor analysis techniques are used to obtain the intermediate spectra. Raman spectra of triplet state benzophenone and acetophenone, obtained as a function of laser excitation kinetics, and the Raman spectra of intermediates formed by energy transfer (triplet-state biacetyl) and hydrogen abstraction (benzhydrol radical) are discussed.

  7. Nonlinear fitness landscape of a molecular pathway.

    PubMed

    Perfeito, Lilia; Ghozzi, Stéphane; Berg, Johannes; Schnetz, Karin; Lässig, Michael

    2011-07-01

    Genes are regulated because their expression involves a fitness cost to the organism. The production of proteins by transcription and translation is a well-known cost factor, but the enzymatic activity of the proteins produced can also reduce fitness, depending on the internal state and the environment of the cell. Here, we map the fitness costs of a key metabolic network, the lactose utilization pathway in Escherichia coli. We measure the growth of several regulatory lac operon mutants in different environments inducing expression of the lac genes. We find a strikingly nonlinear fitness landscape, which depends on the production rate and on the activity rate of the lac proteins. A simple fitness model of the lac pathway, based on elementary biophysical processes, predicts the growth rate of all observed strains. The nonlinearity of fitness is explained by a feedback loop: production and activity of the lac proteins reduce growth, but growth also affects the density of these molecules. This nonlinearity has important consequences for molecular function and evolution. It generates a cliff in the fitness landscape, beyond which populations cannot maintain growth. In viable populations, there is an expression barrier of the lac genes, which cannot be exceeded in any stationary growth process. Furthermore, the nonlinearity determines how the fitness of operon mutants depends on the inducer environment. We argue that fitness nonlinearities, expression barriers, and gene-environment interactions are generic features of fitness landscapes for metabolic pathways, and we discuss their implications for the evolution of regulation.

  8. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  9. The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.

    2006-01-01

    The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting starlight interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.

  10. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis.

    PubMed

    Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C

    2014-12-01

    Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods. Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar performance to the reference method.
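
    A small PLS-DA sketch on synthetic data illustrates the classification recipe: regress a 0/1 class label on many correlated predictors, then threshold the predicted score. scikit-learn's PLSRegression is assumed as the PLS engine:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(11)
        n, p = 120, 80
        X = rng.standard_normal((n, p))
        y = (X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(float)

        pls = PLSRegression(n_components=3).fit(X, y.reshape(-1, 1))
        score = pls.predict(X).ravel()
        y_hat = (score > 0.5).astype(float)  # simple threshold on the PLS score
        sens = np.mean(y_hat[y == 1] == 1)
        spec = np.mean(y_hat[y == 0] == 0)
        print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")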

  11. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

  13. Discrete variable representation in electronic structure theory: Quadrature grids for least-squares tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2013-05-01

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

  14. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
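
    The attenuation effect that favors the structural line can be reproduced in a few lines of Monte Carlo; the structural slope below uses the standard Deming formula with a known error-variance ratio lam:

        import numpy as np

        rng = np.random.default_rng(2)
        slope_true, n, trials = 2.0, 50, 2000
        ols_slopes, sa_slopes = [], []
        for _ in range(trials):
            xi = rng.uniform(0.0, 10.0, n)        # true X
            x = xi + rng.normal(0.0, 1.0, n)      # observed X, with error
            y = slope_true * xi + rng.normal(0.0, 1.0, n)
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y)[0, 1]
            ols_slopes.append(sxy / sxx)          # OLS slope (attenuated)
            lam = 1.0                             # ratio of Y- to X-error variances
            sa = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                  + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
            sa_slopes.append(sa)                  # Deming (structural) slope
        print(f"mean OLS slope {np.mean(ols_slopes):.3f}, "
              f"mean structural slope {np.mean(sa_slopes):.3f}")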

  15. Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practical, interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.

  16. First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients

    NASA Technical Reports Server (NTRS)

    Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard

    1996-01-01

    The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is established independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.

  17. A least-squares parameter estimation algorithm for switched Hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  18. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equation as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
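
    Each group in the scheme runs an ordinary recursive least-squares estimator; a generic RLS update, the basic building block, looks like this on synthetic regression data:

        import numpy as np

        def rls_update(theta, P, phi, y, lam=1.0):
            # phi: regressor vector, y: new measurement, lam: forgetting factor
            k = P @ phi / (lam + phi @ P @ phi)    # gain vector
            theta = theta + k * (y - phi @ theta)  # parameter update
            P = (P - np.outer(k, phi) @ P) / lam   # covariance update
            return theta, P

        rng = np.random.default_rng(4)
        true_theta = np.array([0.5, -1.2, 2.0])
        theta, P = np.zeros(3), 1e3 * np.eye(3)
        for _ in range(200):
            phi = rng.standard_normal(3)
            y = phi @ true_theta + 0.01 * rng.standard_normal()
            theta, P = rls_update(theta, P, phi, y)
        print("estimated parameters:", np.round(theta, 3))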

  19. A Pascal program for weighted least squares regression on a microcomputer.

    PubMed

    Tyson, H; Henderson, H; McKenna, M

    1982-10-01

    Weighted least-squares regression has been programmed in Pascal for a microcomputer. A double precision Pascal compiler and the Motorola 6809 assembler produce a fast machine-code program occupying 22,000 bytes of memory when appended to the Pascal run-time module. Large data sets fit in the remaining memory. A regression with 72 observations and 24 parameters runs in 7 min, excluding optional print out of large matrices. The maximum dimensions of the design matrix, X, can be altered by modifying two Pascal constants. Minor changes to the Pascal source program will make it compatible with other Pascal compilers. The program optionally orthogonalizes the X matrix to detect linearly-dependent columns in X, and/or generate orthogonal parameter estimates. After orthogonalizing X and fitting the model, the parameter estimates for the original X can be retrieved by the program. Regressions on a repeatedly reduced model are performed through elimination of columns in X until the minimum adequate model is obtained.

  20. [The net analyte preprocessing combined with radial basis partial least squares regression applied in noninvasive measurement of blood glucose].

    PubMed

    Li, Qing-Bo; Huang, Zheng-Wei

    2014-02-01

    In order to improve the prediction accuracy of quantitative analysis models for near-infrared spectroscopy of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modeling method suitable for noninvasive human glucose measurement, named NAP-RBFPLS. First, NAP is used to preprocess the near-infrared spectra of blood glucose in order to extract the information that relates only to the glucose signal from the original spectra. This effectively weakens the chance-correlation problems between glucose changes and interfering factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, drift of the measuring instruments, and changes in the measurement environment and conditions. A nonlinear quantitative analysis model is then built from the NAP-processed near-infrared data in order to capture the nonlinear relationship between glucose concentrations and the near-infrared spectra caused by strong scattering in body tissue. The new method is compared with three other quantitative analysis models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS) and RBFPLS, respectively. The experimental results show that the nonlinear calibration model developed by combining the NAP algorithm and RBFPLS regression greatly improves the prediction accuracy on the prediction sets, demonstrating that this nonlinear modeling method has practical applications for research on noninvasive detection of human glucose concentrations.

  1. A class of least-squares filtering and identification algorithms with systolic array architectures

    NASA Technical Reports Server (NTRS)

    Kalson, Seth Z.; Yao, Kung

    1991-01-01

    A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.

  2. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.

  3. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  4. A least squares finite element scheme for transonic flow around harmonically oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Cox, C. L.; Fix, G. J.; Gunzburger, M. D.

    1983-01-01

    The present investigation shows that a finite element scheme with a weighted least squares variational principle is applicable to the problem of transonic flow around a harmonically oscillating airfoil. For the flat plate case, numerical results compare favorably with the exact solution. The obtained numerical results for the transonic problem, for which an exact solution is not known, have the characteristics of known experimental results. It is demonstrated that the performance of the employed numerical method is independent of equation type (elliptic or hyperbolic) and frequency. The weighted least squares principle allows the appropriate modeling of singularities, which is not possible with normal least squares.

  5. Least-squares methods involving the H⁻¹ inner product

    SciTech Connect

    Pasciak, J.

    1996-12-31

    Least-squares methods have been shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H⁻¹ norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H⁻¹ inner product.

  6. Multilevel solvers of first-order system least-squares for Stokes equations

    SciTech Connect

    Lai, Chen-Yao G.

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L²-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  7. The crux of the method: assumptions in ordinary least squares and logistic regression.

    PubMed

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
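
    The contrast is easy to see numerically: an OLS "linear probability" fit can predict outside [0, 1], while logistic regression stays on the probability scale. A sketch with synthetic binary data:

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(8)
        x = rng.uniform(-4.0, 4.0, (300, 1))
        p = 1.0 / (1.0 + np.exp(-2.0 * x.ravel()))  # true logistic probabilities
        y = rng.binomial(1, p)

        ols = LinearRegression().fit(x, y)
        logit = LogisticRegression().fit(x, y)
        grid = np.array([[-4.0], [0.0], [4.0]])
        print("OLS predictions:       ", ols.predict(grid))               # can leave [0, 1]
        print("logistic probabilities:", logit.predict_proba(grid)[:, 1])  # always in [0, 1]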

  8. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of the changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.
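
    The outer loop is a standard nonlinear least-squares problem over the contour parameters. In this sketch a cheap synthetic model replaces the parabolized Navier-Stokes solver, and scipy.optimize.least_squares drives hypothetical contour parameters toward a uniform target profile:

        import numpy as np
        from scipy.optimize import least_squares

        y = np.linspace(0.0, 1.0, 21)      # test-section coordinate
        target = np.full_like(y, 6.0)      # desired Mach 6 everywhere

        def mock_flow_solver(params):
            # stand-in for the CFD solve: contour parameters -> Mach profile
            a, b, c = params
            return 6.0 + a * y + b * y ** 2 + c * np.sin(np.pi * y)

        def residuals(params):
            # deviation from the desired test-section flow-field conditions
            return mock_flow_solver(params) - target

        sol = least_squares(residuals, x0=[0.3, -0.2, 0.1])
        print("optimal parameters:", np.round(sol.x, 6), "cost:", sol.cost)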

  9. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piece-wise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial (LPA) approximation for design of nonlinear filters (estimators) and adaptation of these filters to unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping, from filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations when all other algorithms fail.

  10. Fishery landing forecasting using EMD-based least square support vector machine models

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary, and seasonal, and can hardly be modeled properly and forecasted accurately by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landings is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than Autoregressive Integrated Moving Average (ARIMA), LSSVM, and EMD-ARIMA models on several criteria.

  11. Least squares solutions of the HJB equation with neural network value-function approximators.

    PubMed

    Tassa, Yuval; Erez, Tom

    2007-07-01

    In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation.

  12. Least squares solutions of the HJB equation with neural network value-function approximators.

    PubMed

    Tassa, Yuval; Erez, Tom

    2007-07-01

    In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation.

  13. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    PubMed

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model that can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. In total, 165 lavender essential oil samples were measured by near infrared (NIR) absorption spectroscopy. Analysis of the NIR absorption peaks of all samples showed that lavender essential oil carries abundant chemical information, and that the interference of random noise is relatively low, in the spectral interval of 7100~4500 cm⁻¹. Thus, the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through the clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration samples and 52 validation samples. Gas chromatography mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in lavender essential oil. The matrix was then established from the GC-MS reference data of the two compounds in combination with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and compare their filtering effects; after analyzing the quantitative model results for linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was therefore the optimum pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points that are unrelated to the determined compositions or exhibit nonlinear correlation; finally, 8 spectral intervals comprising 160 wavelength points in total were retained as the dataset. The datasets optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil; the numbers of hidden variables of the two
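
    The PLS modeling step can be sketched with scikit-learn's PLSRegression; the arrays below are synthetic stand-ins for the NIR spectra (restricted to the informative interval) and the GC-MS reference contents, with only the sample counts taken from the abstract:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(157, 160))               # 157 samples x 160 wavelengths
        B = rng.normal(size=(160, 2))
        Y = X @ B + 0.1 * rng.normal(size=(157, 2))   # linalool, linalyl acetate

        X_cal, X_val, Y_cal, Y_val = train_test_split(
            X, Y, test_size=52, random_state=0)       # 105 calibration / 52 validation

        pls = PLSRegression(n_components=8)           # "hidden variables"
        pls.fit(X_cal, Y_cal)
        resid = pls.predict(X_val) - Y_val
        print("RMSEP:", np.sqrt((resid ** 2).mean(axis=0)))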

  15. Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.

    ERIC Educational Resources Information Center

    Pham, Tuan Dinh; Mocks, Joachim

    1992-01-01

    Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
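
    For readers unfamiliar with the model, the least squares estimator of the trilinear decomposition X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r] is usually computed by alternating least squares; the following is a generic CP/PARAFAC sketch, not the estimator analyzed in the article:

        import numpy as np

        def cp_als(X, R, n_iter=200, seed=0):
            # Alternating least squares for X[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r].
            I, J, K = X.shape
            rng = np.random.default_rng(seed)
            A, B, C = (rng.normal(size=(n, R)) for n in (I, J, K))
            kr = lambda U, V: np.einsum('ur,vr->uvr', U, V).reshape(-1, R)
            X1 = X.reshape(I, J * K)                      # mode-1 unfolding
            X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
            X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
            for _ in range(n_iter):
                A = X1 @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = X2 @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = X3 @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C

        # Example: recover an exact rank-2 tensor.
        rng = np.random.default_rng(1)
        A0, B0, C0 = (rng.normal(size=(n, 2)) for n in (5, 6, 7))
        X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
        A, B, C = cp_als(X, R=2)
        print(np.abs(X - np.einsum('ir,jr,kr->ijk', A, B, C)).max())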

  16. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
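
    The punchline of the derivation fits in a few lines: for a constant state (the regression coefficients) with no process noise, the Kalman measurement update is exactly the RLS update. A minimal sketch:

        import numpy as np

        def rls_update(theta, P, x, y, r=1.0):
            # Kalman update for theta_k = theta_{k-1} (constant state),
            # y_k = x_k' theta + v_k with Var(v_k) = r: this is RLS.
            x = x.reshape(-1, 1)
            k = P @ x / (r + x.T @ P @ x)              # Kalman gain
            theta = theta + k.flatten() * (y - x.flatten() @ theta)
            P = P - k @ x.T @ P                        # covariance update
            return theta, P

        rng = np.random.default_rng(1)
        true = np.array([2.0, -1.0, 0.5])
        theta, P = np.zeros(3), 1e6 * np.eye(3)        # diffuse prior
        for _ in range(500):
            x = rng.normal(size=3)
            theta, P = rls_update(theta, P, x, x @ true + 0.1 * rng.normal())
        print(theta)                                   # close to [2, -1, 0.5]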

  17. Iterative least-squares solvers for the Navier-Stokes equations

    SciTech Connect

    Bochev, P.

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  18. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    SciTech Connect

    Manteuffel, T.A; Ressel, K.J.; Starkes, G.

    1996-12-31

    The least-squares finite element framework for the neutron transport equation is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P₁ and P₂ approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

  19. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p²+16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
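
    The technique can be paraphrased in a few lines: accumulate X'X and X'y one observation at a time, never storing X, then solve the normal equations. The sketch below is unweighted; in the weighted NDMMF computation each outer product and each x·y term would simply be multiplied by the observation weight:

        import numpy as np

        def stream_ols(rows, p):
            # Fill the p x p matrix X'X and the p-vector X'y in one pass.
            XtX, Xty = np.zeros((p, p)), np.zeros(p)
            for x, y in rows:                  # row = (feature vector, response)
                XtX += np.outer(x, x)
                Xty += x * y
            return np.linalg.solve(XtX, Xty)   # e.g. by Gaussian elimination

        rng = np.random.default_rng(0)
        beta = np.array([1.0, 2.0, 3.0])
        rows = ((x, x @ beta + 0.01 * rng.normal())
                for x in rng.normal(size=(10000, 3)))
        print(stream_ols(rows, p=3))           # ~ [1, 2, 3]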

  20. Least-squares reverse time migration with and without source wavelet estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Qingchen; Zhou, Hui; Chen, Hanming; Wang, Jie

    2016-11-01

    Least-squares reverse time migration (LSRTM) attempts to find the best-fit reflectivity model by minimizing the mismatch between the observed and simulated seismic data, where source wavelet estimation is one of the crucial issues. We divide the frequency-domain observed seismic data by the numerical Green's function at the receiver nodes to estimate the source wavelet for the conventional LSRTM method, and propose a source-independent LSRTM based on a convolution-based objective function. The numerical Green's function can be simulated with a Dirac wavelet and the migration velocity in the frequency or time domain. Compared to the conventional method with its additional source-estimation procedure, the source-independent LSRTM is insensitive to the source wavelet and retains its amplitude-preserving ability even when using an incorrect wavelet without source estimation. In order to improve the anti-noise ability, we apply a robust hybrid-norm objective function to both methods and use synthetic seismic data contaminated by random Gaussian and spike noise with a signal-to-noise ratio of 5 dB to verify their feasibility. The final migration images show that the source-independent algorithm is more robust and has a higher amplitude-preserving ability than the conventional source-estimated method.
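
    The wavelet-estimation step described here amounts to a spectral division of the observed data by the numerical Green's function; a one-trace sketch follows, where the damping constant eps is an assumed regularization not specified in the abstract:

        import numpy as np

        def estimate_wavelet(d_obs, g, eps=1e-3):
            # Damped frequency-domain division W = D G* / (|G|^2 + eps |G|max^2).
            D, G = np.fft.rfft(d_obs), np.fft.rfft(g)
            W = D * np.conj(G) / (np.abs(G) ** 2 + eps * np.max(np.abs(G)) ** 2)
            return np.fft.irfft(W, n=len(d_obs))

        # Synthetic check: data = Ricker wavelet convolved with a spiky
        # stand-in for the Green's function at one receiver.
        n, dt = 256, 0.004
        t = np.arange(n) * dt
        a = (np.pi * 25.0 * (t - 0.1)) ** 2
        w = (1.0 - 2.0 * a) * np.exp(-a)                   # 25 Hz Ricker
        g = np.zeros(n)
        g[[40, 90, 170]] = [1.0, -0.6, 0.3]
        d = np.fft.irfft(np.fft.rfft(w) * np.fft.rfft(g), n=n)
        print("max error:", np.max(np.abs(estimate_wavelet(d, g) - w)))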

  1. Analysis of crustal deformation and strain characteristics in the Tianshan Mountains with least-squares collocation

    NASA Astrophysics Data System (ADS)

    Li, S. P.; Chen, G.; Li, J. W.

    2015-11-01

    By fitting the observed velocity field of the Tianshan Mountains from 1992 to 2006 with least-squares collocation, we established a velocity field model of this region. The model reflects the crustal deformation characteristics of the Tianshan reasonably well. From the Tarim Basin to the Junggar Basin and the Kazakh platform, the crustal deformation decreases gradually. Divided at 82° E, the convergence rates in the west are distinctly higher than those in the east. We also calculated the crustal strain parameters for the Tianshan Mountains. The results for maximum shear strain exhibit a concentration of significantly high values at Wuqia and the regions to its west, where the values reach a maximum of 4.4×10⁻⁸ a⁻¹. According to the isogram distribution of the surface expansion rate, we found evidence that the Tianshan Mountains have been suffering strong lateral extrusion from the basins on both sides. Combining this analysis with existing focal mechanism solutions from 1976 to 2014, we conclude that earthquake events concentrate readily in regions where the maximum shear strain accumulates or changes abruptly. For the Tianshan Mountains, the possibility of strong earthquakes in Wuqia-Jiashi and Lake Issyk-Kul will persist over the long term.

  2. Methods for Least Squares Data Smoothing by Adjustment of Divided Differences

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2008-09-01

    A brief survey is presented of the main methods used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of this smoothing approach is that it automatically provides a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although this is a combinatorial problem, whose solution can require about O(n^k) quadratic programming calculations in n variables and n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m > 2 for small values of k. Attention is paid to the purpose of each method rather than to its details. Some software packages make the methods publicly accessible through library systems.
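
    The controlled quantity, the number of sign alterations in the sequence of m-th divided differences, is easy to compute for uniformly spaced data, where the divided differences are proportional to the ordinary forward differences. A small sketch:

        import numpy as np

        def sign_changes(y, m):
            # Sign alterations in the m-th differences (constant factors
            # dropped, since only the signs matter for uniform spacing).
            d = np.diff(y, n=m)
            s = np.sign(d[d != 0])              # ignore exact zeros
            return int(np.sum(s[1:] != s[:-1]))

        x = np.linspace(0.0, 1.0, 50)
        y_clean = np.sin(2 * np.pi * x)         # one sign change for m = 2
        y_noisy = y_clean + 0.05 * np.random.default_rng(0).normal(size=50)
        print(sign_changes(y_clean, m=2), sign_changes(y_noisy, m=2))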

  3. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, namely a comparison of the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  4. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcome. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.

  5. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
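
    For the classical least-squares superposition solution (the Kabsch algorithm), one natural set of sufficient statistics consists of the set size, the two coordinate sums and the 3 x 3 cross-product matrix, all additive under set union. The sketch below illustrates the idea under that assumption; the article's formal derivation may differ in detail:

        import numpy as np

        def suff_stats(X, Y):
            # Additive statistics for corresponding vector sets X, Y (n x 3).
            return {'n': len(X), 'sx': X.sum(0), 'sy': Y.sum(0), 'sxy': Y.T @ X}

        def combine(s, t):
            # Statistics of a union are the sums of the parts' statistics.
            return {k: s[k] + t[k] for k in s}

        def optimal_rotation(s):
            # Centered cross-covariance from the statistics alone, then SVD.
            C = s['sxy'] - np.outer(s['sy'], s['sx']) / s['n']
            U, _, Vt = np.linalg.svd(C)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
            return U @ D @ Vt

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))
        c, s_ = np.cos(0.7), np.sin(0.7)
        R_true = np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])
        Y = X @ R_true.T + np.array([1.0, -2.0, 0.5])
        stats = combine(suff_stats(X[:60], Y[:60]), suff_stats(X[60:], Y[60:]))
        print(np.allclose(optimal_rotation(stats), R_true))   # True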

  6. Speckle evolution with multiple steps of least-squares phase removal

    SciTech Connect

    Chen Mingzhou; Dainty, Chris; Roux, Filippus S.

    2011-08-15

    We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.

  7. On the accuracy of least squares methods in the presence of corner singularities

    NASA Technical Reports Server (NTRS)

    Cox, C. L.; Fix, G. J.

    1985-01-01

    Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts. Moreover, mesh refinement or the use of special singular elements does not appreciably improve matters. It is shown that if the least squares formulation is done in appropriately weighted spaces, then optimal convergence results in unweighted spaces like L².

  8. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    SciTech Connect

    Jiang, Lijian; Li, Xinping

    2015-08-01

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To treat heterogeneity and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy in certain

  9. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  10. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that are subjected to a comparative study through the handling of UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for the analyzed components.

  11. Communication: Acceleration of coupled cluster singles and doubles via orbital-weighted least-squares tensor hypercontraction

    SciTech Connect

    Parrish, Robert M.; Sherrill, C. David; Hohenstein, Edward G.; Kokkila, Sara I. L.; Martínez, Todd J.

    2014-05-14

    We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N⁶) to O(N⁵), with remarkably low error. Combined with a T₁-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.

  12. Optimization of contoured hypersonic scramjet inlets with a least-squares parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Auslender, A. H.

    1993-01-01

    A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.

  13. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

  14. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    PubMed

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and the modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as for MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to distinguish between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.

  16. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.

  17. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, and the recovery must then proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Owing to differences in PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least squares support vector regression (LSSVR) has proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters of the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA. The fitness function of FOA is calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and

  19. A method based on moving least squares for XRII image distortion correction

    SciTech Connect

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-11-15

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was found to be sensitive neither to pincushion distortion nor to sigmoidal distortion. The sensitivity of the proposed method to local distortion was lower than or comparable with that of the traditional global method. The sensitivity of the proposed method to noise was higher than that of all three traditional methods. Nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method is still higher than that of the traditional methods. The sensitivity of the proposed method to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out with a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively).
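
    A generic moving least squares mapping of this kind, local weighted affine fits to control-point pairs with a Gaussian weight playing the role of the cutoff radius, can be sketched as follows (an illustration of the MLS idea, not the paper's exact combination with polynomial fitting):

        import numpy as np

        def mls_map(p_eval, ctrl_src, ctrl_dst, radius=50.0):
            # At each evaluation point, fit an affine map to the control
            # pairs by weighted least squares and apply it to the point.
            out = np.empty_like(p_eval)
            A = np.hstack([ctrl_src, np.ones((len(ctrl_src), 1))])  # affine basis
            for i, p in enumerate(p_eval):
                d2 = ((ctrl_src - p) ** 2).sum(1)
                w = np.sqrt(np.exp(-d2 / radius ** 2))[:, None]     # local weights
                coef, *_ = np.linalg.lstsq(w * A, w * ctrl_dst, rcond=None)
                out[i] = np.append(p, 1.0) @ coef
            return out

        # Usage: map distorted-image points onto corrected coordinates, given
        # matched control points ctrl_src (distorted) -> ctrl_dst (corrected).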

  1. Influence of the least-squares phase on optical vortices in strongly scintillated beams

    SciTech Connect

    Chen Mingzhou; Roux, Filippus S.

    2009-07-15

    The optical vortices that exist in strongly scintillated beams make it difficult for conventional adaptive optics systems to remove the phase distortions. When the least-squares reconstructed phase is removed, the vortices still remain. However, we found that the removal of the least-squares phase induces a portion of the vortices to be annihilated during subsequent propagation, causing a reduction in the total number of vortices. This can be understood in terms of the restoration of equilibrium between explicit vortices, which are visible in the phase function, and vortex bound states, which are somehow encoded in the continuous phase fluctuations. Numerical simulations are provided to show that the total number of optical vortices in a strongly scintillated beam can be reduced significantly after a few steps of least-squares phase corrections.

  2. Least-squares reverse-time migration of Cranfield VSP data for monitoring CO2 injection

    NASA Astrophysics Data System (ADS)

    TAN, S.; Huang, L.

    2012-12-01

    Cost-effective monitoring for carbon utilization and sequestration requires high-resolution imaging with a minimal amount of data. Least-squares reverse-time migration is a promising imaging method for this purpose. We apply least-squares reverse-time migration to a portion of the 3D vertical seismic profile data acquired at the Cranfield enhanced oil recovery field in Mississippi for monitoring CO2 injection. Conventional reverse-time migration of limited data suffers from significant image artifacts and poor image resolution. Least-squares reverse-time migration can reduce image artifacts and improve the image resolution. We demonstrate the significant improvements of least-squares reverse-time migration by comparing its migration images of the Cranfield VSP data with those obtained using conventional reverse-time migration.

  3. On the equivalence of Kalman filtering and least-squares estimation

    NASA Astrophysics Data System (ADS)

    Mysen, E.

    2016-07-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.

  4. A note on implementation of decaying product correlation structures for quasi-least squares.

    PubMed

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.

  5. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    NASA Astrophysics Data System (ADS)

    Kermarrec, Gaël; Schön, Steffen

    2016-09-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account via a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the row sums of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator, in terms of both the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices, such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences, compared with the solutions computed with the commonly used diagonal elevation-dependent model, was reached for GPS relative positioning with double differences, single point positioning, and precise point positioning. The estimate differences between the equivalent and the classical model with a fully populated covariance matrix were below the millimetre level for all simulated GPS cases, and at the sub-millimetre level for relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
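
    The equivalence for the mean estimator is easy to verify numerically: with d the vector of row sums of the weight matrix W, one has d'y = 1'Wy and sum(d) = 1'W1, so the diagonally weighted estimate of a constant mean coincides with the full GLS estimate. A small check (the AR(1)-type covariance is only an assumed example):

        import numpy as np

        rng = np.random.default_rng(0)
        n, rho = 50, 0.6
        idx = np.arange(n)
        Sigma = rho ** np.abs(np.subtract.outer(idx, idx))   # correlated noise
        W = np.linalg.inv(Sigma)                             # weight matrix
        y = 3.0 + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

        ones = np.ones(n)
        mean_gls = (ones @ W @ y) / (ones @ W @ ones)        # fully populated
        d = W.sum(axis=1)                                    # row sums
        mean_dwls = (d @ y) / d.sum()                        # equivalent diagonal
        print(np.isclose(mean_gls, mean_dwls))               # True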

  6. Hermitian tridiagonal solution with the least norm to quaternionic least squares problem

    NASA Astrophysics Data System (ADS)

    Ling, Sitao; Wang, Minghui; Wei, Musheng

    2010-03-01

    Quaternionic least squares (QLS) is an efficient method for solving approximation problems in quaternionic quantum theory. In view of the extensive applications of Hermitian tridiagonal matrices in physics, in this paper we list some properties of basis matrices and subvectors related to tridiagonal matrices, and give an iterative algorithm for finding the Hermitian tridiagonal solution with the least norm of the quaternionic least squares problem, making the best use of the structure of the real representation matrices. We also propose a preconditioning strategy for the Algorithm LSQR-Q of Wang, Wei and Feng (2008) [14] and for our algorithm. Numerical experiments are provided to verify the effectiveness of our method.

  7. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels, namely eucalyptus, soil (understory), and shade, was performed. The shade fraction images (shade images) derived from these two methods were compared in terms of performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
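
    A constrained-least-squares unmixing step of this general kind can be sketched with nonnegative least squares plus a heavily weighted sum-to-one row; this is an illustrative formulation, not the exact constrained and weighted variants compared in the paper:

        import numpy as np
        from scipy.optimize import nnls

        def unmix(pixel, E, delta=100.0):
            # Fractions f >= 0 with sum(f) ~ 1 for the linear mixing model
            # pixel ~ E @ f; columns of E are endmember spectra.
            A = np.vstack([E, delta * np.ones(E.shape[1])])  # soft constraint row
            b = np.append(pixel, delta)
            f, _ = nnls(A, b)
            return f

        # Synthetic 4-band pixel mixing three endmembers
        # (playing the roles of eucalyptus, soil and shade).
        E = np.array([[0.30, 0.60, 0.05],
                      [0.45, 0.55, 0.05],
                      [0.60, 0.40, 0.04],
                      [0.70, 0.35, 0.03]])
        true_f = np.array([0.5, 0.3, 0.2])
        print(unmix(E @ true_f, E))        # ~ [0.5, 0.3, 0.2]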

  8. First-Order System Least-Squares for the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.

  9. Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1992-01-01

    TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.

  10. Least-squares streamline diffusion finite element approximations to singularly perturbed convection-diffusion problems

    SciTech Connect

    Lazarov, R D; Vassilevski, P S

    1999-05-06

    In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.

  11. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    NASA Astrophysics Data System (ADS)

    Verdoolaege, G.; Shabbir, A.; Hornung, G.

    2016-11-01

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression, which is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We report here on first results of applying GLS to the estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.

  12. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  13. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2006-04-01

    Fortran 77 software is given for least squares smoothing of data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient by the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of divided-differences constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and to exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics

  14. Using Technology to Optimize and Generalize: The Least-Squares Line

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…

  15. An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2010-01-01

    The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…

  16. A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio

    2013-01-01

    The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which yields similar information. The…

  17. Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Atanasov, Dimitar V.

    2012-01-01

    Many models of cognitive diagnosis, including the "least squares distance model" (LSDM), work under the "conjunctive" assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a "disjunctive" version of the LSDM under which the correct item response occurs when "at…

  18. On the use of a penalized least squares method to process kinematic full-field measurements

    NASA Astrophysics Data System (ADS)

    Moulart, Raphaël; Rotinat, René

    2014-07-01

    This work explores the performance of an alternative procedure for smoothing and differentiating full-field displacement measurements. After recalling the strategies currently used by the experimental mechanics community, a short overview of the available smoothing algorithms is drawn up and the requirements such an algorithm must fulfil to be applicable to kinematic measurements are listed. A comparative study is then performed of the 2D penalized least squares method and two other commonly implemented strategies. The results obtained by penalized least squares are comparable in quality to those produced by the two other algorithms, while the penalized least squares method appears to be the fastest and the most flexible. Unlike the other two methods, penalized least squares can automatically choose the parameter governing the amount of smoothing to apply; unfortunately, this automation proves unsuitable for the proposed application since it does not lead to optimal strain maps. Finally, with this technique it is possible to differentiate to obtain strain maps before smoothing them (whereas smoothing is normally applied to displacement maps before differentiation), which can in some cases lead to a more effective reconstruction of the strain fields.
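
    A minimal penalized least squares smoother of the Whittaker type, assuming a one-dimensional signal and a second-difference penalty (the full-field 2D version in the paper generalizes this): lam sets the amount of smoothing, and the derivative can be taken after smoothing or, as the authors discuss, the order of the two steps can be swapped.

      # Sketch: minimize ||y - z||^2 + lam * ||D2 z||^2 for the smoothed z.
      import numpy as np

      def whittaker_smooth(y, lam=100.0):
          n = len(y)
          D = np.diff(np.eye(n), n=2, axis=0)    # second-difference operator
          return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

      x = np.linspace(0.0, 1.0, 200)
      rng = np.random.default_rng(0)
      u = x ** 2 + 0.001 * rng.standard_normal(200)       # noisy "displacement"
      strain = np.gradient(whittaker_smooth(u, 50.0), x)  # smooth, then derive
      print(strain[100])                                  # near 2*x = 1.0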

  19. A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.

    2011-01-01

    Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…

  20. Analyzing Multilevel Data: Comparing Findings from Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2013-01-01

    This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…

  1. The Use of Orthogonal Distances in Generating the Total Least Squares Estimate

    ERIC Educational Resources Information Center

    Glaister, P.

    2005-01-01

    The method of least squares enables the determination of an estimate of the slope and intercept of a straight line relationship between two quantities or variables X and Y. Although a theoretical relationship may exist between X and Y of the form Y = mX + c, in practice experimental or measurement errors will occur, and the observed or measured…

  2. Investigating Importance Weighting of Satisfaction Scores from a Formative Model with Partial Least Squares Analysis

    ERIC Educational Resources Information Center

    Wu, Chia-Huei; Chen, Lung Hung; Tsai, Ying-Mei

    2009-01-01

    This study introduced a formative model to investigate the utility of importance weighting on satisfaction scores with partial least squares analysis. Based on the bottom-up theory of satisfaction evaluations, the measurement structure for weighted/unweighted domain satisfaction scores was modeled as a formative model, whereas the measurement…

  3. An Alternative Two Stage Least Squares (2SLS) Estimator for Latent Variable Equations.

    ERIC Educational Resources Information Center

    Bollen, Kenneth A.

    1996-01-01

    An alternative two-stage least squares (2SLS) estimator of the parameters in LISREL type models is proposed and contrasted with existing estimators. The new 2SLS estimator allows observed and latent variables to originate from nonnormal distributions, is consistent, has a known asymptotic covariance matrix, and can be estimated with standard…

  4. A negative-norm least squares method for Reissner-Mindlin plates

    NASA Astrophysics Data System (ADS)

    Bramble, J. H.; Sun, T.

    1998-07-01

    In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.

  5. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    NASA Astrophysics Data System (ADS)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
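
    The inversion loop can be sketched generically: least-squares migration solves m = argmin ||Lm - d||^2 for a linear modeling (demigration) operator L with a conjugate-gradient-type method. In the sketch below L is a stand-in 1D blur operator rather than a Kirchhoff demigration operator, and SciPy's LSQR plays the role of the iterative scheme; the paper's preconditioning filter would be applied between iterations.

      # Sketch: iterative least-squares inversion of a linear operator.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      n = 200
      kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
      kernel /= kernel.sum()                     # symmetric, so self-adjoint

      def blur(m):                               # stand-in forward operator
          return np.convolve(m, kernel, mode="same")

      L = LinearOperator((n, n), matvec=blur, rmatvec=blur)
      m_true = np.zeros(n)
      m_true[[50, 90, 140]] = [1.0, -0.7, 0.5]   # reflectivity spikes
      rng = np.random.default_rng(1)
      d = blur(m_true) + 0.01 * rng.standard_normal(n)
      m_hat = lsqr(L, d, iter_lim=50)[0]         # CG-type LS solution
      print(m_hat[[50, 90, 140]])                # estimates of the spikes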

  7. ON ASYMPTOTIC DISTRIBUTION AND ASYMPTOTIC EFFICIENCY OF LEAST SQUARES ESTIMATORS OF SPATIAL VARIOGRAM PARAMETERS. (R827257)

    EPA Science Inventory

    Abstract

    In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...

  8. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    ERIC Educational Resources Information Center

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  9. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

    ERIC Educational Resources Information Center

    Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2011-01-01

    The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…

  10. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…

  11. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  12. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

    2003-01-01

    The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.

  13. Cross-term free based bistatic radar system using sparse least squares

    NASA Astrophysics Data System (ADS)

    Sevimli, R. Akin; Cetin, A. Enis

    2015-05-01

    Passive Bistatic Radar (PBR) systems use illuminators of opportunity, such as FM, TV, and DAB broadcasts. The most common illuminators of opportunity used in PBR systems are FM radio stations. Single-FM-channel based PBR systems do not have high range resolution and may turn out to be noisy. In order to enhance the range resolution of PBR systems, algorithms using several FM channels at the same time have been proposed. In standard methods, consecutive FM channels are translated to baseband as is and fed to the matched filter to compute the range-Doppler map. Multichannel FM based PBR systems have better range resolution than single channel systems; however, spurious sidelobe peaks occur as a side effect. In this article, we linearly predict the surveillance signal using the modulated and delayed reference signal components. We vary the modulation frequency and the delay to cover the entire range-Doppler plane. Whenever there is a target at a specific range and Doppler value, the prediction error is minimized. The cost function of the linear prediction equation has three components: the first term is the real part of the ordinary least squares term, the second term is the imaginary part of the least squares, and the third component is the l2-norm of the prediction coefficients. Separate minimization of the real and imaginary parts reduces the sidelobes and decreases the noise level of the range-Doppler map. The third term enforces a sparse solution on the least squares problem. We experimentally observed that this approach is better than both the standard least squares and other sparse least squares approaches in terms of sidelobes. Extensive simulation examples will be presented in the final form of the paper.
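
    A common generic way to enforce sparsity in a least squares problem is an l1 penalty solved by proximal gradient descent (ISTA); the sketch below is that generic real-valued formulation, not the paper's split real/imaginary cost.

      # Sketch: min 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA.
      import numpy as np

      def ista(A, b, lam=0.1, n_iter=500):
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size
          for _ in range(n_iter):
              z = x - step * (A.T @ (A @ x - b))        # gradient step on LS term
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 120))
      x_true = np.zeros(120)
      x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
      b = A @ x_true + 0.01 * rng.standard_normal(60)
      print(np.flatnonzero(np.abs(ista(A, b)) > 0.1))   # recovers {5, 40, 77}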

  14. LCLSQ: an implementation of an algorithm for linearly constrained linear least-squares problems. [For IBM 370 and 3033, in FORTRAN]

    SciTech Connect

    Crane, R L; Garbow, B S; Hillstrom, K E; Minkoff, M

    1980-11-01

    This report describes the implementation of an algorithm of Stoer and Schittkowski for solving linearly constrained linear least-squares problems. These problems arise in many areas, particularly in data fitting where a model is provided and parameters in the model are selected to be a best least-squares fit to known experimental observations. By adding constraints to the least-squares fit, one can force user-specified properties on the parameters selected. The algorithm used applies a numerically stable implementation of the Gram-Schmidt orthogonalization procedure to deal with a factorization approach for solving the constrained least-squares problem. The software developed allows for either a user-supplied feasible starting point or the automatic generation of a feasible starting point, redecomposition after solving the problem to improve numerical accuracy, and diagnostic printout to follow the computations in the algorithm. In addition to a description of the actual method used to solve the problem, a description of the software structure and the user interfaces is provided, along with a numerical example. 3 figures, 1 table.
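
    For the equality-constrained special case, the solution can be written directly from the KKT conditions; the sketch below is that textbook route, not the Stoer and Schittkowski algorithm with its Gram-Schmidt factorization and inequality handling.

      # Sketch: min ||Ax - b||^2 subject to Cx = d, via the KKT system.
      import numpy as np

      def constrained_lstsq(A, b, C, d):
          n, p = A.shape[1], C.shape[0]
          KKT = np.block([[2.0 * A.T @ A, C.T],
                          [C, np.zeros((p, p))]])
          rhs = np.concatenate([2.0 * A.T @ b, d])
          return np.linalg.solve(KKT, rhs)[:n]   # drop Lagrange multipliers

      A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0, 2.0])
      C = np.array([[1.0, 1.0]])                 # force the two parameters
      d = np.array([1.0])                        # to sum to one
      print(constrained_lstsq(A, b, C, d))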

  15. Detection of glutamic acid in oilseed rape leaves using near infrared spectroscopy and the least squares-support vector machine.

    PubMed

    Bao, Yidan; Kong, Wenwen; Liu, Fei; Qiu, Zhengjun; He, Yong

    2012-01-01

    Amino acids are important indices of the growth status of oilseed rape under herbicide stress. Near infrared (NIR) spectroscopy combined with chemometrics was applied for fast determination of glutamic acid in oilseed rape leaves. The optimal spectral preprocessing method was obtained after comparing Savitzky-Golay smoothing, standard normal variate, multiplicative scatter correction, first and second derivatives, detrending and direct orthogonal signal correction. Linear and nonlinear calibration methods were developed, including partial least squares (PLS) and least squares-support vector machine (LS-SVM). The most effective wavelengths (EWs) were determined by the successive projections algorithm (SPA), and these wavelengths were used as the inputs of the PLS and LS-SVM models. The best prediction results were achieved by the SPA-LS-SVM (Raw) model, with correlation coefficient r = 0.9943 and root mean square error of prediction (RMSEP) = 0.0569 for the prediction set. These results indicated that NIR spectroscopy combined with SPA-LS-SVM is feasible for fast and effective detection of glutamic acid in oilseed rape leaves. The selected EWs could be used to develop spectral sensors, and the amino acid data are helpful for studying the mechanism of herbicide action.
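
    For reference, LS-SVM training reduces to a single linear system in the standard (Suykens-type) formulation; the sketch below shows that system for RBF-kernel regression on a synthetic curve, not on the paper's spectra.

      # Sketch: LS-SVM regression, [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y].
      import numpy as np

      def rbf(X1, X2, s=1.0):
          d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * s ** 2))

      def lssvm_fit(X, y, gamma=10.0, s=1.0):
          n = len(y)
          M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                        [np.ones((n, 1)), rbf(X, X, s) + np.eye(n) / gamma]])
          sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
          return sol[0], sol[1:]                 # bias b, dual coefficients

      def lssvm_predict(Xq, X, b, a, s=1.0):
          return rbf(Xq, X, s) @ a + b

      X = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
      y = np.sin(X).ravel()
      b, a = lssvm_fit(X, y)
      print(lssvm_predict(X[:3], X, b, a))       # close to sin at first points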

  16. Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines

    PubMed Central

    Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen

    2015-01-01

    This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes. PMID:26393611

  17. Kinetic approach for the enzymatic determination of levodopa and carbidopa assisted by multivariate curve resolution-alternating least squares.

    PubMed

    Grünhut, Marcos; Garrido, Mariano; Centurión, Maria E; Fernández Band, Beatriz S

    2010-07-12

    A combination of kinetic spectroscopic monitoring and multivariate curve resolution-alternating least squares (MCR-ALS) was proposed for the enzymatic determination of levodopa (LVD) and carbidopa (CBD) in pharmaceuticals. The enzymatic reaction process was carried out in a reverse stopped-flow injection system and monitored by UV-vis spectroscopy. The spectra (292-600 nm) were recorded throughout the reaction and analyzed by multivariate curve resolution-alternating least squares. A small calibration matrix containing nine mixtures was used in the model construction. Additionally, a set of six validation mixtures was used to evaluate the prediction ability of the model. The lack of fit obtained was 4.3%, the explained variance 99.8% and the overall prediction error 5.5%. Tablets of commercial samples were analyzed and the results were validated by the pharmacopeia method (high performance liquid chromatography). No significant differences were found (alpha = 0.05) between the reference values and those obtained with the proposed method. Notably, a single chemometric model made it possible to determine both analytes simultaneously. PMID:20630175

  18. Linear least-squares method for unbiased estimation of T1 from SPGR signals.

    PubMed

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J; Pierpaoli, Carlo

    2008-08-01

    The longitudinal relaxation time, T(1), can be estimated from two or more spoiled gradient recalled echo images (SPGR) acquired with different flip angles and/or repetition times (TRs). The function relating signal intensity to flip angle and TR is nonlinear; however, a linear form proposed 30 years ago is currently widely used. Here we show that this linear method provides T(1) estimates that have similar precision but lower accuracy than those obtained with a nonlinear method. We also show that T(1) estimated by the linear method is biased due to improper accounting for noise in the fitting. This bias can be significant for clinical SPGR images; for example, T(1) estimated in brain tissue (800 ms < T(1) < 1600 ms) can be overestimated by 10% to 20%. We propose a weighting scheme that correctly accounts for the noise contribution in the fitting procedure. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy of the estimated T(1) from the widely-used linear, the proposed weighted-uncertainty linear, and the nonlinear methods. We show that the linear method with weighted uncertainties reduces the bias of the linear method, providing T(1) estimates comparable in precision and accuracy to those of the nonlinear method while reducing computation time significantly. PMID:18666108
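
    The linearization in question can be sketched as follows: with y = S/sin(alpha) and x = S/tan(alpha), the SPGR signal equation becomes y = E1*x + M0*(1 - E1), where E1 = exp(-TR/T1), so a straight-line fit recovers T1 from the slope. The weights below are purely illustrative placeholders, not the authors' noise-derived scheme.

      # Sketch: weighted linear (DESPOT1-style) T1 fit from SPGR signals.
      import numpy as np

      TR, T1_true, M0 = 0.005, 1.0, 1000.0       # seconds, seconds, a.u.
      a = np.deg2rad([2.0, 5.0, 10.0, 15.0, 20.0])
      E1 = np.exp(-TR / T1_true)
      S = M0 * np.sin(a) * (1 - E1) / (1 - E1 * np.cos(a))   # SPGR equation

      x, y = S / np.tan(a), S / np.sin(a)
      w = np.sin(a) ** 2                         # illustrative weights only
      A = np.column_stack([x, np.ones_like(x)])
      W = np.diag(w)
      slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
      print(-TR / np.log(slope))                 # recovers T1 = 1.0 s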

  19. Model updating of rotor systems by using Nonlinear least square optimization

    NASA Astrophysics Data System (ADS)

    Jha, A. K.; Dewangan, P.; Sarangi, M.

    2016-07-01

    Mathematical models of structures or machinery always differ from the physical system they represent, because numerical prediction of a physical system's behavior is limited by the assumptions used in developing the mathematical model. Model updating is therefore necessary so that the updated model replicates the physical system. This work focuses on model updating of rotor systems at various speeds as well as at different modes of vibration. Support bearing characteristics severely influence the dynamics of rotor systems such as turbines, compressors, pumps, electrical machines and machine tool spindles; therefore, the bearing parameters (stiffness and damping) are taken as the updating parameters. A finite element model of the rotor system is developed using Timoshenko beam elements. The unbalance response in the time domain and the frequency response function are calculated by numerical techniques and compared with experimental data to update the FE model of the rotor system. An algorithm based on the unbalance response in the time domain is proposed for updating rotor systems at different running speeds. An unbalance response assurance criterion (URAC) is defined to check the degree of correlation between the updated FE model and the physical model.

  20. Application of nonlinear least squares methods to the analysis of solar spectra

    NASA Technical Reports Server (NTRS)

    Shaw, J. H.

    1985-01-01

    A fast method of retrieving vertical temperature profiles in the atmosphere and of determining the paths of the rays producing the ATMOS occultation spectra has been developed. The results from one set of occultation data appear to be consistent with other available data. A study of sources of error, a search for other suitable features for measurement in the spectra, and modification of the program to obtain mixing ratio profiles have been initiated.

  1. Partial least squares regression on DCT domain for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-09-01

    Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method using Partial Least Squares (PLS) regression on Discrete Cosine Transform (DCT) coefficients. With its strong ability for data de-correlation and energy compaction, the DCT is used to obtain compact features of the infrared face. To extract discriminative information from the DCT coefficients, a class-specific one-to-rest Partial Least Squares (PLS) classifier is learned for accurate classification. The infrared data were collected by an infrared camera, a Thermo Vision A40 supplied by FLIR Systems Inc. The experimental results show that the recognition rate of the proposed algorithm can reach 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and DCT.

  2. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause of this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Despite the failure of standard methods for deriving error estimates, the computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  3. Fractional Order Digital Differentiator Design Based on Power Function and Least squares

    NASA Astrophysics Data System (ADS)

    Kumar, Manjeet; Rawat, Tarun Kumar

    2016-10-01

    In this article, we propose the use of a power function and the least squares method for designing a fractional order digital differentiator. The input signal is transformed into a power function by using Taylor series expansion, and its fractional derivative is computed using the Grunwald-Letnikov (G-L) definition. Next, the fractional order digital differentiator is modelled as a finite impulse response (FIR) system that yields a fractional order derivative of the G-L type for a power function. The FIR system coefficients are obtained by using the least squares method. Two examples are used to demonstrate that the fractional derivative of digital signals can be computed by the proposed technique, and the results of a third and fourth example reveal that the proposed technique gives superior performance in comparison with existing techniques.

  4. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
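
    The identification step can be sketched as the standard recursive least squares recursion for a linear-in-parameters model y = phi^T theta; the regressors below are synthetic stand-ins for the aerodynamic coefficient model.

      # Sketch: recursive least squares with a forgetting factor.
      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.99):
          k = P @ phi / (lam + phi @ P @ phi)    # gain vector
          theta = theta + k * (y - phi @ theta)  # correct by prediction error
          P = (P - np.outer(k, phi @ P)) / lam   # covariance update
          return theta, P

      rng = np.random.default_rng(0)
      true_theta = np.array([0.8, -0.3])
      theta, P = np.zeros(2), 1000.0 * np.eye(2)
      for _ in range(200):
          phi = rng.standard_normal(2)
          y = phi @ true_theta + 0.01 * rng.standard_normal()
          theta, P = rls_update(theta, P, phi, y)
      print(theta)                               # approaches [0.8, -0.3]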

  5. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    SciTech Connect

    Verdoolaege, Geert

    2015-01-13

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  6. Weighted linear least squares problem: an interval analysis approach to rank determination

    SciTech Connect

    Manteuffel, T. A.

    1980-08-01

    This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ≈ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^{-1} can be used to bound the rank of A^I. A modification of the Modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.

  7. Difference mapping method using least square support vector regression for variable-fidelity metamodelling

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Shao, Xinyu; Gao, Liang; Jiang, Ping; Qiu, Haobo

    2015-06-01

    Engineering design, especially for complex engineering systems, is usually a time-consuming process involving computation-intensive computer-based simulation and analysis methods. A difference mapping method using least squares support vector regression is developed in this work as a special metamodelling methodology that includes variable-fidelity data, to replace computationally expensive computer codes. A general difference mapping framework is proposed in which a surrogate base is first created, and the approximation is then obtained by mapping the difference between the base and the real high-fidelity response surface. Least squares support vector regression is adopted to accomplish the mapping. Two different sampling strategies, nested and non-nested design of experiments, are conducted to explore their respective effects on modelling accuracy. Different sample sizes and three measures of approximation accuracy are considered.

  8. Improved least squares MR image reconstruction using estimates of k-space data consistency.

    PubMed

    Johnson, Kevin M; Block, Walter F; Reeder, Scott B; Samsonov, Alexey

    2012-06-01

    This study describes a new approach to reconstructing data that have been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least squares-based image estimation techniques, including parallel imaging and constrained reconstruction/compressed sensing applications. The performance of several variants of the algorithm, tailored for fast spin echo and self-gated respiratory gating applications, was evaluated in simulations, phantom experiments, and in vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches.

  9. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    NASA Astrophysics Data System (ADS)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
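
    The outer/inner structure can be sketched as a one-dimensional search over the Doppler parameter with a closed-form least squares gain inside; the narrowband frequency-shift model below is an illustrative assumption, not the paper's exact signal model.

      # Sketch: grid (outer) over Doppler shift, closed-form (inner) LS gain.
      import numpy as np

      def estimate_doppler(r, s, t, candidates):
          best = None
          for eps in candidates:                 # outer loop: Doppler shift
              s_eps = s * np.exp(2j * np.pi * eps * t)
              g = np.vdot(s_eps, r) / np.vdot(s_eps, s_eps)   # inner: LS gain
              err = np.linalg.norm(r - g * s_eps) ** 2
              if best is None or err < best[0]:
                  best = (err, eps, g)
          return best[1], best[2]

      fs = 1000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      s = np.exp(2j * np.pi * 50.0 * t)          # known training waveform
      r = 0.7 * np.exp(1j * 0.3) * s * np.exp(2j * np.pi * 3.2 * t)
      eps_hat, g_hat = estimate_doppler(r, s, t, np.linspace(-5.0, 5.0, 201))
      print(eps_hat, abs(g_hat))                 # near 3.2 and 0.7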

  10. Improved Least Squares MR Image Reconstruction Using Estimates of k-Space Data Consistency

    PubMed Central

    Johnson, Kevin M.; Block, Walter F.; Reeder, Scott. B.; Samsonov, Alexey

    2011-01-01

    This work describes a new approach to reconstructing data that have been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least-squares based image estimation techniques, including parallel imaging and constrained reconstruction/compressed sensing applications. The performance of several variants of the algorithm, tailored for fast spin echo (FSE) and self-gated respiratory gating applications, was evaluated in simulations, phantom experiments, and in-vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches. PMID:22135155

  12. Least squares adjustment of large-scale geodetic networks by orthogonal decomposition

    SciTech Connect

    George, J.A.; Golub, G.H.; Heath, M.T.; Plemmons, R.J.

    1981-11-01

    This article reviews some recent developments in the solution of large sparse least squares problems typical of those arising in geodetic adjustment problems. The new methods are distinguished by their use of orthogonal transformations which tend to improve numerical accuracy over the conventional approach based on the use of the normal equations. The adaptation of these new schemes to allow for the use of auxiliary storage and their extension to rank deficient problems are also described.
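
    The numerical point, that orthogonal transformations avoid squaring the condition number, can be demonstrated in a few lines on a deliberately ill-conditioned problem; the geodetic-scale sparse machinery of the article is of course far beyond this sketch.

      # Sketch: QR-based LS versus the normal equations on an ill-posed basis.
      import numpy as np

      A = np.vander(np.linspace(0.0, 1.0, 50), 12)   # ill-conditioned columns
      x_true = np.ones(12)
      b = A @ x_true
      Q, R = np.linalg.qr(A)
      x_qr = np.linalg.solve(R, Q.T @ b)             # orthogonal approach
      x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
      print(np.linalg.norm(x_qr - x_true), np.linalg.norm(x_ne - x_true))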

  13. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    SciTech Connect

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed and requires no regularization. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. (author)

  14. Least-squares adjustment of triangles and quadrilaterals in which all angles and distances are observed.

    USGS Publications Warehouse

    Smith, W.K.; Varnes, D.J.

    1987-01-01

    The solutions for combined triangulation-trilateration least-squares adjustment of triangles and quadrilaterals, in which all angles and distances are observed, are developed. The triangle is adjusted by the method of observations only, and the quadrilateral is adjusted by the method of indirect observations. The problem of choosing the proper weight factor relating the angle measurements to the distance measurements is addressed. By changing the weight factor, the adjustment can vary from pure triangulation to pure trilateration.-from Authors

  15. On the efficiency of the orthogonal least squares training method for radial basis function networks.

    PubMed

    Sherstinsky, A; Picard, R W

    1996-01-01

    The efficiency of the orthogonal least squares (OLS) method for training approximation networks is examined using the criterion of energy compaction. We show that the selection of basis vectors produced by the procedure is not the most compact when the approximation is performed using a nonorthogonal basis. Hence, the algorithm does not produce the smallest possible networks for a given approximation error. Specific examples are given using the Gaussian radial basis functions type of approximation networks.

  16. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
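
    A plain per-column baseline for the problem class this patent targets is shown below, using SciPy's NNLS on each observation vector independently; the combinatorial algorithm gains its speed by reorganizing exactly this computation across the many columns.

      # Sketch: nonnegative least squares, one observation vector at a time.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      A = rng.random((50, 5))                    # mixing matrix
      X_true = rng.random((5, 200))              # 200 observation vectors
      B = A @ X_true + 0.01 * rng.standard_normal((50, 200))

      X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
      print(np.abs(X - X_true).max())            # small recovery error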

  17. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to an extent that renders it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough second look at the mathematical modeling to determine why unacceptable contradiction has occurred, prompting necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
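
    An illustrative sketch of the idea only (the function name, threshold and index below are invented for the example and are not the authors' Matlab algorithm): use residuals of the minimum-norm least-squares solution as a per-equation inconsistency index, drop equations above a tolerance, and re-solve.

      # Sketch: flag and remove overly contradictory equations, then re-solve.
      import numpy as np

      def prune_and_solve(A, b, tol=3.0):
          x = np.linalg.pinv(A) @ b              # minimum-norm LS solution
          index = np.abs(A @ x - b)              # inconsistency index per row
          keep = index <= tol                    # drop extreme contradictions
          return np.linalg.pinv(A[keep]) @ b[keep], index

      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
      b = np.array([1.0, 1.0, 2.0, 8.0])         # last equation contradicts
      x, index = prune_and_solve(A, b)
      print(x, index)                            # x near [1, 1]; big last index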

  18. Theoretical study of the incompressible Navier-Stokes equations by the least-squares method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.

    1994-01-01

    Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is thoroughly studied at first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique then is employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of differential equations and additional boundary conditions. In this paper, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators which play an essential role in the analysis.

  19. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    NASA Astrophysics Data System (ADS)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves, and the generalization performance is better with decreasing numbers of training samples. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  20. A least-squares estimation approach to improving the precision of inverse dynamics computations.

    PubMed

    Kuo, A D

    1998-02-01

    A least-squares approach to computing inverse dynamics is proposed. The method utilizes equations of motion for a multi-segment body, incorporating terms for ground reaction forces and torques. The resulting system is overdetermined at each point in time, because kinematic and force measurements outnumber unknown torques, and may be solved using weighted least squares to yield estimates of the joint torques and joint angular accelerations that best match measured data. An error analysis makes it possible to predict error magnitudes for both conventional and least-squares methods. A modification of the method also makes it possible to reject constant biases such as those arising from misalignment of force plate and kinematic measurement reference frames. A benchmark case is presented, which demonstrates reductions in joint torque errors on the order of 30 percent compared to the conventional Newton-Euler method, for a wide range of noise levels on measured data. The advantages over the Newton-Euler method include making best use of all available measurements, ability to function when less than a full complement of ground reaction forces is measured, suppression of residual torques acting on the top-most body segment, and the rejection of constant biases in data.

  1. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
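
    The unconstrained formulation is a plain matrix least-squares problem: stacking reference vectors as rows of V and measurements as rows of W, one fits W ≈ V A^T and checks orthogonality afterwards. The sketch below uses a small synthetic rotation.

      # Sketch: attitude matrix by unconstrained least squares (QR-based lstsq).
      import numpy as np

      rng = np.random.default_rng(0)
      th = np.deg2rad(10.0)
      A_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                         [np.sin(th),  np.cos(th), 0.0],
                         [0.0, 0.0, 1.0]])
      V = rng.standard_normal((20, 3))           # reference vectors as rows
      W = V @ A_true.T + 1e-3 * rng.standard_normal((20, 3))  # measurements
      X, *_ = np.linalg.lstsq(V, W, rcond=None)  # solves V @ X ~ W, X = A^T
      A_hat = X.T
      print(np.round(A_hat @ A_hat.T, 4))        # nearly the identity matrix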

  2. Discrimination of healthy and osteoarthritic articular cartilages by Fourier transform infrared imaging and partial least squares-discriminant analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang

    2015-06-01

    Fourier transform infrared imaging (FTIRI) combined with chemometric algorithms has strong potential for extracting complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilage for the first time. A PLS model was built on a calibration matrix of spectra randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed on the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the percentages of correctly identified healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrate that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens.
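
    A PLS-DA sketch on dummy-coded class labels (0 = healthy, 1 = lesioned), assuming scikit-learn's PLSRegression and synthetic stand-in spectra rather than the FTIRI data:

      # Sketch: PLS regression on 0/1 labels, thresholded for classification.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n, p = 80, 200                             # 80 spectra, 200 wavenumbers
      labels = np.repeat([0, 1], n // 2)
      X = rng.standard_normal((n, p)) + 0.5 * labels[:, None]  # class offset
      pls = PLSRegression(n_components=5).fit(X, labels.astype(float))
      pred = (pls.predict(X).ravel() > 0.5).astype(int)        # threshold 0.5
      print((pred == labels).mean())             # training accuracy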

  3. A principal-component and least-squares method for allocating polycyclic aromatic hydrocarbons in sediment to multiple sources

    SciTech Connect

    Burns, W.A.; Mankiewicz, P.J.; Bence, A.E.; Page, D.S.; Parker, K.R.

    1997-06-01

    A method was developed to allocate polycyclic aromatic hydrocarbons (PAHs) in sediment samples to the PAH sources from which they came. The method uses principal-component analysis to identify possible sources and a least-squares model to find the source mix that gives the best fit of 36 PAH analytes in each sample. The method identified 18 possible PAH sources in a large set of field data collected in Prince William Sound, Alaska, USA, after the 1989 Exxon Valdez oil spill, including diesel oil, diesel soot, spilled crude oil in various weathering states, natural background, creosote, and combustion products from human activities and forest fires. Spill oil was generally found to be a small increment of the natural background in subtidal sediments, whereas combustion products were often the predominant sources for subtidal PAHs near sites of past or present human activity. The method appears to be applicable to other situations, including other spills.

  4. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, both by increasing acquisition speed and by increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed, yielding a performance improvement in the optimization of the non-linear functions fitting the three geometries. The results show that, with this combination, a higher quality of fit, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', i.e. a situation where the point cloud does not cover a complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
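
    The fitting step itself can be sketched for the circle case with SciPy's Levenberg-Marquardt solver, here started from a simple algebraic (Kasa) guess in place of the paper's chaos-optimization initialization:

      # Sketch: geometric circle fit, algebraic initial guess + LM refinement.
      import numpy as np
      from scipy.optimize import least_squares

      def fit_circle(x, y):
          # Kasa fit: x^2 + y^2 = 2a*x + 2b*y + c gives the starting point
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          a0, b0, c0 = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
          r0 = np.sqrt(c0 + a0 ** 2 + b0 ** 2)
          resid = lambda p: np.hypot(x - p[0], y - p[1]) - p[2]  # geometric
          return least_squares(resid, [a0, b0, r0], method="lm").x

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, np.pi, 50)            # half circle: incomplete cloud
      x = 3.0 + 2.0 * np.cos(t) + 0.01 * rng.standard_normal(50)
      y = -1.0 + 2.0 * np.sin(t) + 0.01 * rng.standard_normal(50)
      print(fit_circle(x, y))                    # near center (3, -1), radius 2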

  5. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples with respect to derivative estimation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from meshless methods is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.

  6. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support Vector Machines for Classifying the Species of Sorghum Seeds

    NASA Astrophysics Data System (ADS)

    Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X. L.

    2016-01-01

    This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. Durbin and run tests on the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), locally linear embedding, Laplacian eigenmaps, and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were used as inputs to LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP score and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify sorghum species with reasonable accuracy.

  7. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support Vector Machines for Classifying the Species of Sorghum Seeds

    PubMed Central

    Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X.L.

    2016-01-01

    This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. Durbin and run tests on the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), locally linear embedding, Laplacian eigenmaps, and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were used as inputs to LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP score and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify sorghum species with reasonable accuracy. PMID:26817580

  8. Resolution of five-component mixture using mean centering ratio and inverse least squares chemometrics

    PubMed Central

    2013-01-01

    Background: A comparative study of the use of mean centering of ratio spectra and inverse least squares for the resolution of paracetamol, methylparaben, propylparaben, chlorpheniramine maleate, and pseudoephedrine hydrochloride has been carried out, showing that the two chemometric methods provide a good example of the high resolving power of these techniques. Method (I) is mean centering of ratio spectra, which applies mean-centered ratio spectra in four successive steps; this eliminates the derivative steps and therefore improves the signal-to-noise ratio. The absorption spectra of the prepared solutions were measured in the range 220–280 nm. Method (II) is based on inverse least squares, which relies on updating a previously developed multivariate calibration model. The absorption spectra of the prepared mixtures were recorded in the range 230–270 nm. Results: The linear concentration ranges were 0–25.6, 0–15.0, 0–15.0, 0–45.0, and 0–100.0 μg mL⁻¹ for paracetamol, methylparaben, propylparaben, chlorpheniramine maleate, and pseudoephedrine hydrochloride, respectively. The mean recoveries for simultaneous determination were between 99.9 and 101.3% for the two methods. The two developed methods have been successfully used for the prediction of the five-component mixture in Decamol Flu syrup with good selectivity, high sensitivity, and extremely low detection limits. Conclusion: No published method has been reported for simultaneous determination of the five components of this mixture, so the results of the mean centering of ratio spectra method were compared with those of the proposed inverse least squares method. Statistical comparison was performed using the t-test and F-ratio at P = 0.05. There was no significant difference between the results. PMID:24028626
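
    The inverse least squares step admits a compact numerical sketch: concentrations are regressed directly on absorbances over the calibration set, and the fitted coefficient matrix is then applied to new spectra. The synthetic spectra, dimensions, and noise level below are assumptions, not the published calibration data.

```python
# Inverse least squares (ILS) calibration sketch with synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_mix, n_wl, n_comp = 60, 21, 5
K = rng.random((n_comp, n_wl))       # pure-component spectra (unknown to ILS)
C_cal = rng.random((n_mix, n_comp))  # calibration concentrations
A_cal = C_cal @ K + 0.001 * rng.normal(size=(n_mix, n_wl))

# ILS model: C ~= A @ B, with B estimated by ordinary least squares
B, *_ = np.linalg.lstsq(A_cal, C_cal, rcond=None)

c_new = rng.random(n_comp)           # an "unknown" five-component mixture
a_new = c_new @ K                    # its (noise-free) spectrum
print(np.round(a_new @ B, 3))        # predicted concentrations
print(np.round(c_new, 3))            # true concentrations
```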

  9. Least-squares finite element solutions for three-dimensional backward-facing step flow

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang

    1993-01-01

    Comprehensive numerical solutions of the steady-state incompressible viscous flow over a three-dimensional backward-facing step up to Re = 800 are presented. The results are obtained by the least-squares finite element method (LSFEM), which is based on the velocity-pressure-vorticity formulation. The computational model is the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds numbers. The calculated values of the primary reattachment length are in good agreement with experimental results.

  10. A least-squares finite element method for incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1992-01-01

    A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than the saddle-point problem produced by the classic mixed method, and can thus accommodate equal-order interpolations. The method has no parameter to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds numbers up to 10,000 and the backward-facing step flow at Reynolds numbers up to 900 are presented.

  11. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By reinterpreting the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix contains the effect of all error sources, known or not. Results based on the proposed technique are presented for a simple two-observer, measurement-error-only problem.

  12. In situ ply strength: An initial assessment. [using laminate fracture data and a least squares method

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sullivan, T. L.

    1978-01-01

    The in situ ply strengths in several composites were calculated from laminate fracture data for appropriate low-modulus and high-modulus fiber composites, used in conjunction with the least squares method. The laminate fracture data were obtained from tests on Modmor-I graphite/epoxy, AS-graphite/epoxy, boron/epoxy, and E-glass/epoxy. The results show that the calculated in situ ply strengths can be considerably different from those measured in unidirectional composites, especially the transverse strengths and those in angleplied laminates with transply cracks.

  13. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  14. Distribution of error in least-squares solution of an overdetermined system of linear simultaneous equations

    NASA Technical Reports Server (NTRS)

    Miller, C. D.

    1972-01-01

    Probability density functions were derived for errors in the evaluation of unknowns by the least squares method in a system of nonhomogeneous linear equations. The coefficients of the unknowns were assumed correct, and adequate computational precision was also assumed. A vector space was used, with the number of dimensions equal to the number of equations. An error vector was defined and assumed to have a uniform distribution of orientation throughout the vector space. The density functions are shown to be insensitive to the biasing effects of the source of the system of equations.

  15. Simultaneous kinetic spectrophotometric determination of spironolactone and canrenone in urine using partial least-squares regression.

    PubMed

    Martín, E; Jiménez, A I; Hernández, O; Jiménez, F; Arias, J J

    1999-06-01

    A method is proposed for the simultaneous determination of spironolactone and canrenone in urine based on the different rates at which they react with sulphuric acid to yield a trienone. Kinetic spectrophotometric data are processed by partial least-squares (PLS) regression. The optimum sulphuric acid concentration and temperature are determined from response surfaces, using PLS methodology to relate both variables to the relative square error of prediction (RSEP, the parameter to be minimized). The relative errors made in the quantitation of each diuretic by the proposed method are less than 5%, and the overall error, as RSEP, ranges from 1.06 to 1.44%. PMID:18967585

  16. A least-squares method for second order noncoercive elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Ku, Jaeun

    2007-03-01

    In this paper, we consider a least-squares method proposed by Bramble, Lazarov and Pasciak (1998) which can be thought of as a stabilized Galerkin method for noncoercive problems with unique solutions. We modify their method by weakening the strength of the stabilization terms and present various new error estimates. The modified method has all the desirable properties of the original method; indeed, we shall show some theoretical properties that are not known for the original method. At the same time, our numerical experiments show an improvement of the method due to the modification.

  17. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    SciTech Connect

    Yoo, Jaechil

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  18. Least squares support vector machines for direction of arrival estimation with error control and validation.

    SciTech Connect

    Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

    2003-02-01

    The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

  19. SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM

    SciTech Connect

    WOHLBERG, BRENDT; RODRIGUEZ, PAUL

    2007-01-08

    Basis Pursuit and Basis Pursuit Denoising, well established techniques for computing sparse representations, minimize an ℓ2 data fidelity term subject to an ℓ1 sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ1 data fidelity term has recently been proposed, also implemented via a mapping to a linear program. The authors introduce an alternative approach via an Iteratively Reweighted Least Squares algorithm, providing greater flexibility in the choice of data fidelity term norm, and computational advantages in certain circumstances.
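
    A standard IRLS scheme for an ℓ1 data fidelity term, sketched below under assumed data, repeatedly solves a weighted least-squares problem with weights 1/max(|r|, ε) so that large residuals are progressively down-weighted; this is the textbook recursion, not necessarily the authors' exact formulation.

```python
# IRLS for min ||Ax - b||_1: each pass solves a weighted least-squares
# problem whose weights damp rows with large residuals.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 10))
x_true = rng.normal(size=10)
b = A @ x_true
b[::10] += 5.0                                 # sparse, large outliers

x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary l2 start
for _ in range(30):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), 1e-8)      # IRLS weights for the l1 norm
    x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
print(np.linalg.norm(x - x_true))              # near zero despite outliers
```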

  20. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    SciTech Connect

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  1. 3-D foliation unfolding with volume and bed-length least-squares conservation

    SciTech Connect

    Leger, M.; Morvan, J.M.; Thibaut, M.

    1994-12-31

    Restoration of a geologic structure to its state at earlier times is a good means of critiquing, and then improving, its interpretation. Restoration software already exists in 2D, but much work remains to be done in 3D. The authors focus on the interbedding slip phenomenon, with bed-length and volume conservation. They unfold a (geometric) foliation by optimizing the following least-squares criteria: horizontalness, bed-length conservation, and volume conservation, under equality constraints related to the position of the "binding" or "pin-surface".

  2. Improving precision of X-ray fluorescence analysis of lanthanide mixtures using partial least squares regression

    NASA Astrophysics Data System (ADS)

    Kirsanov, Dmitry; Panchuk, Vitaly; Goydenko, Alexander; Khaydukova, Maria; Semenov, Valentin; Legin, Andrey

    2015-11-01

    This study addresses the problem of simultaneous quantitative analysis of six lanthanides (Ce, Pr, Nd, Sm, Eu, Gd) in mixed solutions by two different X-ray fluorescence techniques: energy-dispersive (EDX) and total reflection (TXRF). The concentration of each lanthanide was varied in the range 10⁻⁶–10⁻³ mol/L, the low values being around the detection limit of the methods. This resulted in XRF spectra with a very poor signal-to-noise ratio and overlapping bands in the case of EDX, while only the latter problem was observed for TXRF. It was shown that the ordinary least squares approach to numerical calibration fails to provide reasonable precision in the quantification of individual lanthanides. Partial least squares (PLS) regression was able to circumvent these spectral shortcomings and yielded adequate calibration models for both techniques, with RMSEP (root mean squared error of prediction) values around 10⁻⁵ mol/L. It was demonstrated that the comparatively simple and inexpensive EDX method can ensure precision similar to that of the more sophisticated TXRF when the spectra are treated by PLS.

  3. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the August 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  4. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a technique based on the chemical master equation with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that among such techniques the specific choice of algorithm makes only minor performance differences. PMID:26826353

  5. Power-law modeling based on least-squares minimization criteria.

    PubMed

    Hernández-Bermejo, B; Fairén, V; Sorribas, A

    1999-10-01

    The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, its parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law over a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the predicted steady state using the least-squares power-law is closer to the actual steady state of the target system.
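
    The alternative derivation can be illustrated in a few lines: instead of taking the tangent in log space at a single operating point, a power law v = a·S^g is fitted by least squares over a whole concentration range. The saturable rate law and range below are illustrative assumptions, not the paper's test systems.

```python
# Least-squares power-law approximation of a saturable rate law, fitted
# over a concentration range in log-log space (illustrative values).
import numpy as np

Vmax, Km = 1.0, 0.5
S = np.linspace(0.1, 2.0, 50)        # concentration range of interest
v = Vmax * S / (Km + S)              # target kinetic rate law

g, log_a = np.polyfit(np.log(S), np.log(v), 1)   # LS line in log space
print(f"power-law fit: v ~= {np.exp(log_a):.3f} * S**{g:.3f}")
```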

  6. Online segmentation of time series based on polynomial least-squares approximations.

    PubMed

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial, obtained by means of the update steps, can be interpreted as optimal (in the least-squares sense) estimators for the average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of several artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data-streaming applications, offers high accuracy at very low computational cost. PMID:20975120
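
    A naive baseline conveys the underlying idea: fit a least-squares polynomial in each window and watch the residual norm, which jumps where a single polynomial no longer fits. SwiftSeg's fast orthogonal-basis updates are not reproduced here; the signal, window size, and degree are assumptions.

```python
# Windowed least-squares polynomial fitting as a crude segmentation cue:
# the window containing the breakpoint shows a much larger residual norm.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(300.0)
signal = np.where(t < 165, 0.01 * t, 3.0 - 0.02 * (t - 165))
signal += 0.05 * rng.normal(size=t.size)       # piecewise signal + noise

win, deg = 30, 2
for start in range(0, t.size - win + 1, win):
    sl = slice(start, start + win)
    coeffs = np.polyfit(t[sl], signal[sl], deg)       # LS polynomial
    resid = signal[sl] - np.polyval(coeffs, t[sl])
    print(start, round(float(np.linalg.norm(resid)), 3))
```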

  7. Determination of Protein Secondary Structure from Infrared Spectra Using Partial Least-Squares Regression.

    PubMed

    Wilcox, Kieaibi E; Blanch, Ewan W; Doig, Andrew J

    2016-07-12

    Infrared (IR) spectra contain substantial information about protein structure. This has previously most often been exploited by using known band assignments. Here, we convert spectral intensities in bins within Amide I and II regions to vectors and apply machine learning methods to determine protein secondary structure. Partial least squares was performed on spectra of 90 proteins in H2O. After preprocessing and removal of outliers, 84 proteins were used for this work. Standard normal variate and second-derivative preprocessing methods on the combined Amide I and II data generally gave the best performance, with root-mean-square values for prediction of ∼12% for α-helix, ∼7% for β-sheet, 7% for antiparallel β-sheet, and ∼8% for other conformations. Analysis of Fourier transform infrared (FTIR) spectra of 16 proteins in D2O showed that secondary structure determination was slightly poorer than in H2O. Interval partial least squares was used to identify the critical regions within spectra for secondary structure prediction and showed that the sides of bands were most valuable, rather than their peak maxima. In conclusion, we have shown that multivariate analysis of protein FTIR spectra can give α-helix, β-sheet, other, and antiparallel β-sheet contents with good accuracy, comparable to that of circular dichroism, which is widely used for this purpose. PMID:27322779
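
    Two of the preprocessing steps named above are easy to make concrete: standard normal variate (per-spectrum centering and scaling) and a smoothed second derivative via a Savitzky-Golay filter. The placeholder spectra and filter settings below are assumptions, not the study's data.

```python
# SNV and second-derivative preprocessing of spectra (hedged sketch).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(6)
spectra = rng.random((84, 400))      # rows = proteins, cols = wavenumber bins

snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
      / spectra.std(axis=1, keepdims=True)             # SNV per spectrum
d2 = savgol_filter(spectra, window_length=15, polyorder=3,
                   deriv=2, axis=1)                     # 2nd-derivative spectra
print(snv.shape, d2.shape)
```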

  8. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
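
    Ingredient (iii) above, recursive least squares, can be shown in its generic (non-kernel) form: the inverse Gram matrix is propagated with the Sherman-Morrison identity, so each sample costs only vector operations. This is a textbook RLS sketch, not the paper's kernelized LSTD algorithms.

```python
# Generic recursive least squares: no per-sample matrix inversion, thanks
# to the Sherman-Morrison rank-one update of P = (X^T X)^(-1).
import numpy as np

rng = np.random.default_rng(7)
d = 5
w_true = rng.normal(size=d)
w = np.zeros(d)
P = 1e3 * np.eye(d)                  # large initial "covariance"

for _ in range(500):
    x = rng.normal(size=d)
    y = x @ w_true + 0.01 * rng.normal()
    Px = P @ x
    k = Px / (1.0 + x @ Px)          # gain vector
    w = w + k * (y - x @ w)          # correct by the prediction error
    P = P - np.outer(k, Px)          # rank-one downdate
print(np.round(w - w_true, 4))       # ~zero
```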

  9. Fast algorithm for solving the Hankel/Toeplitz Structured Total Least Squares problem

    NASA Astrophysics Data System (ADS)

    Lemmerling, Philippe; Mastronardi, Nicola; van Huffel, Sabine

    2000-07-01

    The Structured Total Least Squares (STLS) problem is a natural extension of the Total Least Squares (TLS) problem when constraints on the matrix structure need to be imposed. Similar to the ordinary TLS approach, the STLS approach can be used to determine the parameter vector of a linear model, given some noisy measurements. In many signal processing applications, the imposition of this matrix structure constraint is necessary for obtaining Maximum Likelihood (ML) estimates of the parameter vector. In this paper we consider the Toeplitz (Hankel) STLS problem (i.e., an STLS problem in which the Toeplitz (Hankel) structure needs to be preserved). A fast implementation of an algorithm for solving this frequently occurring STLS problem is proposed. The increased efficiency is obtained by exploiting the low displacement rank of the involved matrices and the sparsity of the associated generators. The fast implementation is compared to two other implementations of algorithms for solving the Toeplitz (Hankel) STLS problem. The comparison is carried out on a recently proposed speech compression scheme. The numerical results confirm the high efficiency of the newly proposed fast implementation: the straightforward implementations have a complexity of O((m+n)^3) and O(m^3), whereas the proposed implementation has a complexity of O(mn+n^2).

  10. Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.

    PubMed

    Li, Yusong; LeBoeuf, Eugene J; Basu, P K

    2005-10-01

    A numerical model of the lattice Boltzmann method (LBM) utilizing a least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh, while high stability is implied by a von Neumann linearized stability analysis. Implemented on an unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory than traditional LBM. Accurate numerical results are presented for two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of randomly generated porous media, which further demonstrates its inherent geometric flexibility. PMID:16383571

  11. Modeling individual HRTF tensor using high-order partial least squares

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Li, Lin

    2014-12-01

    A tensor is used to describe head-related transfer functions (HRTFs), which depend on frequency, sound direction, and anthropometric parameters. It preserves the multi-dimensional structure of the measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in the lower-dimensional space acts as the output of the multi-linear model. Key anthropometric parameters, used as the inputs of the model, are selected by Laplacian scores and correlation analysis between all the measured parameters and the individual core tensor. The multi-linear regression model is then constructed by high-order partial least squares (HOPLS), which seeks a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings are used to control the complexity of the model and to prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance in predicting individual HRTFs, especially for sound directions ipsilateral to the concerned ear. Subjective listening tests show that the predicted individual HRTFs approximate the measured HRTFs for sound localization.

  12. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    PubMed Central

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629

  13. Online least squares one-class support vector machines-based abnormal visual event detection.

    PubMed

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-12-12

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method.

  14. Weighted least-squares phase unwrapping algorithm based on derivative variance correlation map

    NASA Astrophysics Data System (ADS)

    Lu, Yuangang; Wang, Xiangzhao; Zhang, Xuping

    2007-02-01

    Among different phase unwrapping approaches, the weighted least-squares minimization methods are gaining attention. In these algorithms, the weighting coefficients are generated from a quality map. The intrinsic drawbacks of existing quality maps constrain the application of these algorithms: they often fail to handle wrapped phase data that contain error sources such as phase discontinuities, noise, and undersampling. In order to deal with such intractable wrapped phase data, a new weighted least-squares phase unwrapping algorithm based on a derivative variance correlation map is proposed. In the algorithm, the derivative variance correlation map, a novel quality map, can truly reflect wrapped phase quality, ensuring a more reliable unwrapped result. The definition of the derivative variance correlation map and the principle of the proposed algorithm are presented in detail. The performance of the new algorithm has been tested on simulated wrapped data of a spherical surface and on experimental interferometric synthetic aperture radar (IFSAR) wrapped data. Computer simulation and experimental results have verified that the proposed algorithm can work effectively even when a wrapped phase map contains intractable error sources.

  15. Non-negative least-squares variance component estimation with application to GPS time series

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2016-05-01

    The problem of negative variance components is likely to occur in many geodetic applications. This problem can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. The NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 permanent GPS stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise: all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with small average amplitudes of 0.16, 0.13, and 0.45 mm/year^(1/2) for the north, east, and up components, respectively. About half of the positive estimated random walk amplitudes are statistically significant, indicating that one-third of the total time series have significant random walk noise.

  16. A fast least-squares arrival time estimator for scintillation pulses

    SciTech Connect

    Petrick, N.; Hero, A.O. III; Clinthorne, N.H.; Rogers, W.L.

    1994-08-01

    The true weighted least-squares (WLS) arrival time estimator for scintillation pulse detection was previously found to outperform conventional arrival time estimators such as leading-edge and constant-fraction timers, but it has limited applications because of its complexity. A new diagonalized version of the weighted least-squares (DWLS) estimator has been developed which, like the true WLS, incorporates the statistical properties of the scintillation detector. The new DWLS reduces estimator complexity at the expense of fundamental timing resolution. The advantage of the DWLS implementation is that only scalar multiplications and additions are needed instead of the matrix operations used in the true WLS. It also preserves the true WLS's ability to effectively separate piled-up pulses. The DWLS estimator has been applied to pulses that approximate the response of BGO and NaI(Tl) scintillation detectors. The timing resolution obtained with the DWLS estimator is then compared to that of conventional analog timers, along with the Cramer-Rao lower bound on achievable timing error. The DWLS outperforms the conventional arrival time estimators but does not attain the lower bound; however, it is more robust than the true WLS estimator.

  17. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  18. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

    NASA Astrophysics Data System (ADS)

    Gao, Ling; Ren, Shouxin

    2005-10-01

    The simultaneous determination of Ni(II), Cd(II), Cu(II), and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethylammonium bromide as reagents in a borax-hydrochloric acid buffer medium at pH 9.22. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and the wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples, so the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so the quality of noise removal can be improved in the wavelet packet domain. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even with severe spectral overlap.

  19. Online least squares one-class support vector machines-based abnormal visual event detection.

    PubMed

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629

  20. Weighted Least Squares Estimates of the Magnetotelluric Transfer Functions from Nonstationary Data

    SciTech Connect

    Stodt, John A.

    1982-11-01

    Magnetotelluric field measurements can generally be viewed as sums of signal and additive random noise components. The standard unweighted least squares estimates of the impedance and tipper functions which are usually calculated from noisy data are not optimal when the measured fields are nonstationary. The nonstationary behavior of the signals and noises should be exploited by weighting the data appropriately to reduce errors in the estimates of the impedances and tippers. Insight into the effects of noise on the estimates is gained by careful development of a statistical model, within a linear system framework, which allows for nonstationary behavior of both the signal and noise components of the measured fields. The signal components are, by definition, linearly related to each other by the impedance and tipper functions. It is therefore appropriate to treat them as deterministic parameters, rather than as random variables, when analyzing the effects of noise on the calculated impedances and tippers. From this viewpoint, weighted least squares procedures are developed to reduce the errors in impedances and tippers which are calculated from nonstationary data.
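
    The estimator at the core of this discussion is easy to state in code: with a diagonal weight matrix W (here, inverse noise variances tracking the nonstationarity), the weighted least-squares solution is x_hat = (A^T W A)^(-1) A^T W b. The linear model and noise profile below are illustrative assumptions, not magnetotelluric data.

```python
# Weighted vs. ordinary least squares under nonstationary noise
# (illustrative sketch; inverse-variance weights are assumed known).
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(200, 2))
x_true = np.array([1.5, -0.7])
noise_sd = np.where(np.arange(200) < 100, 0.1, 2.0)   # quiet, then noisy
b = A @ x_true + noise_sd * rng.normal(size=200)

w = 1.0 / noise_sd**2                                 # inverse-variance weights
x_wls = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
print("WLS:", np.round(x_wls, 3), " OLS:", np.round(x_ols, 3))
```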

  1. Interval analysis approach to rank determination in linear least squares problems

    SciTech Connect

    Manteuffel, T.A.

    1980-06-01

    The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices A^I is defined that contains all possible perturbations of A due to uncertainties; A^I is said to be rank deficient if any member of A^I is rank deficient. A modification to the QR decomposition method of solution of the least-squares problem allows a determination of the rank of A^I and a partial interval analysis of the solution vector x. This procedure requires the computation of R^-1. Another modification is proposed which determines the rank of A^I without computing R^-1. The additional computational effort is O(N^2), where N is the column dimension of A. 4 figures.

  2. [Biomass Compositional Analysis Using Sparse Partial Least Squares Regression and Near Infrared Spectrum Technique].

    PubMed

    Yao, Yan; Wang, Chang-yue; Liu, Hui-jun; Tang, Jian-bin; Cai, Jin-hui; Wang, Jing-jun

    2015-07-01

    Forest biofuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called sparse partial least squares regression (SPLS) is used to construct a proximate analysis model that analyzes the fuel characteristics of sawdust in combination with near-infrared spectroscopy. The moisture, ash, volatile matter, and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected by a Nicolet NIR spectrometer. After being filtered by a wavelet transform, all of the samples were divided into a training set and a validation set according to sample category and production area. SPLS, principal component regression (PCR), partial least squares regression (PLS), and the least absolute shrinkage and selection operator (LASSO) were used to construct prediction models. The results showed that SPLS can select grouped wavelengths and improve the prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while those of the other constituents have not yet been confirmed. In summary, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and it will play an increasingly important role in the field of NIR applications. PMID:26717741

  3. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2014-09-01

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX), and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive, and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX, and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision, and specificity. The developed methods demonstrate the ability of the multivariate calibration models to resolve the UV spectra of the four-component mixtures using a simple and widely available UV spectrophotometer.

  4. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX), and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive, and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX, and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision, and specificity. The developed methods demonstrate the ability of the multivariate calibration models to resolve the UV spectra of the four-component mixtures using a simple and widely available UV spectrophotometer.

  5. Radio astronomical image formation using constrained least squares and Krylov subspaces

    NASA Astrophysics Data System (ADS)

    Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan

    2016-04-01

    Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
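
    A toy version of the constrained imaging step can be written with scipy's bounded least-squares solver: each pixel is kept between zero and an upper bound, in place of the paper's active-set/Krylov machinery. The measurement operator and bound below are placeholders, not a radio-interferometric model.

```python
# Least squares with per-pixel box constraints (0 <= x <= upper), a stand-in
# for the paper's constrained image formation step.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(9)
M = rng.normal(size=(300, 50))            # measurement operator (placeholder)
img_true = np.clip(rng.normal(0.2, 0.3, size=50), 0.0, None)
y = M @ img_true + 0.01 * rng.normal(size=300)

upper = np.full(50, 1.0)                  # e.g. a dirty-image-style bound
res = lsq_linear(M, y, bounds=(0.0, upper))
print(np.round(res.x[:8], 3))
```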

  6. Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1987-01-01

    This paper presents a differential signal model, along with appropriate sampling techniques, for least squares estimation of the frequency and frequency derivatives, and possibly the phase and amplitude, of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements, and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage compared to previous algorithms, wherein the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper, such a loss in optimality can be made small.

  7. Symbolic Givens reduction and row-ordering in large sparse least squares problems

    SciTech Connect

    Ostrouchov, G.

    1987-05-01

    In the solution of large sparse least squares problems by Givens factorization, a preliminary symbolic step that determines a good processing order and a data structure for the matrix factor is used. In this paper, it is shown that a processing order equivalent to sequential processing by rows can be as good as any processing order using a single pivot row in each column. A notion of local acceptability in row-ordering is introduced and shown to reduce fill globally. Row-orderings satisfying this notion are essentially equivalent to sequential processing by rows and the nature of intermediate fill produced makes implicit representation of fill possible. This forms the basis for a symbolic Givens reduction algorithm that operates in a fixed data structure.
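
    The elementary operation underlying this discussion is the Givens rotation, which zeroes one entry while touching only two rows; because each rotation is so local, the row processing order determines where fill appears in the sparse factor. A minimal numeric version (not the paper's symbolic algorithm) follows.

```python
# A single Givens rotation zeroing the (1, 0) entry of a small matrix.
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s], [-s, c]])
print(G @ A)        # lower-left entry is now zero
```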

  8. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

    1991-01-01

    The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based personal computer (PC) is addressed. Also presented are the results of a study to compare the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS). Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

  9. [Relationships between Dendrobium quality and ecological factors based on partial least square regression].

    PubMed

    Li, Wen-Tao; Huang, Lin-Fang; Du, Jing; Chen, Shi-Lin

    2013-10-01

    A total of eleven ecological factor values were obtained from the ecological suitability database of the geographic information system for traditional Chinese medicine production areas (TCM-GIS), and the relationships between the chemical components of Dendrobium and the ecological factors were analyzed by partial least squares (PLS) regression. There were significant differences in the chemical component contents of the same Dendrobium species from different areas. The polysaccharide content of D. officinale had a significant positive correlation with soil type, the accumulated dendrobine in D. nobile was significantly positively correlated with annual precipitation, and the erianin content of D. chrysotoxum was mainly affected by air temperature. Principal component analysis (PCA) showed that Zhejiang Province is the optimal production area for D. officinale, Guizhou Province is the most appropriate planting area for D. nobile, and Yunnan Province is the best production area for D. chrysotoxum.

  10. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906

  11. A Robust PCT Method Based on Complex Least Squares Adjustment Method

    NASA Astrophysics Data System (ADS)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving the vertical structure of vegetation. However, errors caused by temporal decorrelation, vegetation height, and ground phase always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the acquisition of vegetation vertical structure.

  12. Slip distribution of the 2010 Mentawai earthquake from GPS observation using least squares inversion method

    NASA Astrophysics Data System (ADS)

    Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya

    2016-05-01

    Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. Least-squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimal slip distribution by applying a weighted smoothing constraint and a weighted constraint of zero slip at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, slip in the dip-slip direction tended to dominate. The seismic moment calculated from the slip distribution was 6.89 × 10^20 N·m, which is equivalent to a magnitude of 7.8.
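
    A generic smoothed slip inversion of this kind can be written as one stacked least-squares system. The sketch below uses synthetic Green's functions and a second-difference smoothing operator, not the actual SuGAR data or Mentawai fault geometry; the smoothing weight and edge weighting are illustrative.

        # Hedged sketch of smoothed least-squares slip inversion: GPS offsets
        # d = G s + noise, a Laplacian smoothing constraint weighted by lam,
        # and slip pinned to zero at the rupture edges (synthetic G).
        import numpy as np

        rng = np.random.default_rng(3)
        n_obs, n_patch = 30, 20
        G = rng.normal(size=(n_obs, n_patch))            # toy Green's functions
        s_true = np.exp(-0.5 * ((np.arange(n_patch) - 10) / 3.0) ** 2)
        s_true[[0, -1]] = 0.0                            # zero slip at the edges
        d = G @ s_true + 0.05 * rng.normal(size=n_obs)

        L = np.diff(np.eye(n_patch), n=2, axis=0)        # second differences
        lam = 1.0                                        # smoothing weight

        # Stack data equations, smoothing equations, and edge constraints.
        E = np.zeros((2, n_patch))
        E[0, 0] = E[1, -1] = 1.0
        A = np.vstack([G, lam * L, 100.0 * E])           # heavy weight pins edges
        b = np.concatenate([d, np.zeros(L.shape[0]), np.zeros(2)])
        s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("max slip estimate:", s_hat.max())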

  13. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    SciTech Connect

    Le, Huy Q.; Molloi, Sabee

    2011-01-15

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues

  14. Note: Multivariate system spectroscopic model using Lorentz oscillators and partial least squares regression analysis.

    PubMed

    Gad, R S; Parab, J S; Naik, G M

    2010-11-01

    A multivariate system spectroscopic model plays an important role in understanding the chemometrics of the ensemble under study. In this manuscript we discuss various approaches to modeling a spectroscopic system and demonstrate how the Lorentz oscillator can be used to model any general spectroscopic system. Chemometric studies require customized template design for the corresponding variants participating in the ensemble, which generates the characteristic matrix of the ensemble under study. A typical biological system that resembles human blood tissue, consisting of five major constituents, i.e., alanine, urea, lactate, glucose, and ascorbate, was tested on the model. The model was validated using three approaches, namely, root mean square error (RMSE) analysis within a ±5% confidence interval, the Clarke error grid plot, and an RMSE versus percent noise level study. The model was also tested across various template sizes (consisting of samples ranging from 10 up to 1000) to ascertain the validity of the partial least squares regression. The model has potential in understanding the chemometrics of proteomics pathways.

  15. Note: Multivariate system spectroscopic model using Lorentz oscillators and partial least squares regression analysis

    NASA Astrophysics Data System (ADS)

    Gad, R. S.; Parab, J. S.; Naik, G. M.

    2010-11-01

    A multivariate system spectroscopic model plays an important role in understanding the chemometrics of the ensemble under study. In this manuscript we discuss various approaches to modeling a spectroscopic system and demonstrate how the Lorentz oscillator can be used to model any general spectroscopic system. Chemometric studies require customized template design for the corresponding variants participating in the ensemble, which generates the characteristic matrix of the ensemble under study. A typical biological system that resembles human blood tissue, consisting of five major constituents, i.e., alanine, urea, lactate, glucose, and ascorbate, was tested on the model. The model was validated using three approaches, namely, root mean square error (RMSE) analysis within a ±5% confidence interval, the Clarke error grid plot, and an RMSE versus percent noise level study. The model was also tested across various template sizes (consisting of samples ranging from 10 up to 1000) to ascertain the validity of the partial least squares regression. The model has potential in understanding the chemometrics of proteomics pathways.

  16. A least-squares finite element method for 3D incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. Taylor-Görtler-like vortices are observed at Re = 1,000.

  17. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  18. Underwater terrain positioning method based on least squares estimation for AUV

    NASA Astrophysics Data System (ADS)

    Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

    2015-12-01

    To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo-localization appearing in topographically flat areas, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirements of accurate underwater positioning.

  19. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  20. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
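
    The iterative procedure described above can be illustrated with a toy fixed-point loop: solve the weighted normal equations, refresh the weights from the new estimate, and repeat. The weight function below is hypothetical and exists only to make the sketch self-contained.

        # Hedged sketch of least squares with a state-dependent weight matrix
        # W(x): alternate between solving and re-weighting until a fixed point.
        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(50, 2))
        x_true = np.array([1.0, 2.0])
        b = A @ x_true + 0.1 * rng.normal(size=50)

        def weights(x):
            # Hypothetical dependence: residual variance grows with |Ax|.
            return 1.0 / (1.0 + (A @ x) ** 2)

        x = np.zeros(2)
        for _ in range(20):
            w = weights(x)
            x_new = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
            if np.linalg.norm(x_new - x) < 1e-10:   # fixed point reached
                break
            x = x_new
        print("estimate:", x)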

  1. Obtaining the wavefront phase maps of free form surfaces: using the least squares algorithm

    NASA Astrophysics Data System (ADS)

    Villalobos-Mendoza, B.; Aguirre-Aguirre, D.; Granados-Agustín, F.; Cornejo-Rodríguez, A.

    2015-01-01

    This work presents the validation of the least squares algorithm proposed by Morgan (1982) and Greivenkamp (1984) for obtaining the wavefront phase maps of a free-form surface. The validation was made by simulating synthetic interferograms of a free-form surface using a Bessel function; each interferogram was simulated with a phase shift of π/20. The algorithm is applied to experimental interferograms obtained in a Twyman-Green interferometer where the phase shifting is achieved by using an SLM (Spatial Light Modulator) placed in one of its arms; the phase shifts are produced by displaying all the gray levels from 0 to 255 on the SLM. The phase shifts performed in this experimental setup are smaller than π/4, so the conventional algorithms cannot be applied.
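
    The least-squares phase retrieval referenced above fits, for each pixel, the three-unknown model I_i = a0 + a1*cos(d_i) + a2*sin(d_i), from which the phase follows as phi = atan2(-a2, a1). The sketch below runs this for a single synthetic pixel with π/20 steps; the intensity values are invented.

        # Hedged sketch of per-pixel least-squares phase retrieval under
        # arbitrary (known) phase shifts, as in the Morgan/Greivenkamp model.
        import numpy as np

        deltas = np.arange(40) * (np.pi / 20)      # known phase shifts
        phi_true = 1.234                           # one pixel's phase (synthetic)
        I = 0.5 + 0.4 * np.cos(phi_true + deltas)  # simulated intensity samples

        # Design matrix of the three-unknown linear model.
        M = np.column_stack([np.ones_like(deltas),
                             np.cos(deltas), np.sin(deltas)])
        a0, a1, a2 = np.linalg.lstsq(M, I, rcond=None)[0]
        phi = np.arctan2(-a2, a1)                  # a1 = B cos(phi), a2 = -B sin(phi)
        print("recovered phase:", phi)             # ~1.234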

  2. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  3. Rapid and accurate determination of tissue optical properties using least-squares support vector machines

    PubMed Central

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W.; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464

  4. Rapid and accurate determination of tissue optical properties using least-squares support vector machines.

    PubMed

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W; Dasari, Ramachandra R; Feld, Michael S

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464

  5. Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares

    PubMed Central

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923

  6. Improvement of high-order least-squares integration method for stereo deflectometry.

    PubMed

    Ren, Hongyu; Gao, Feng; Jiang, Xiangqian

    2015-12-01

    Stereo deflectometry measures the local slope of specular surfaces by using two CCD cameras as detectors and one LCD screen as a light source. To obtain 3D topography, the calculated slope data must be integrated. Currently, a high-order finite-difference-based least-squares integration (HFLI) method is used to improve the integration accuracy. However, this method cannot be easily implemented in a circular domain or when gradient data are incomplete. This paper proposes a modified, easily implemented integration method based on HFLI (EI-HFLI), which can work in arbitrary domains and can directly and conveniently handle incomplete gradient data. To carry out the proposed algorithm in a practical stereo deflectometry measurement, gradients are calculated in both CCD frames and then mixed together as the original data to be meshed into a rectangular grid format. Simulation and experiments show that this modified method is feasible and works efficiently. PMID:26836684
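
    The core of any least-squares integration scheme is a stacked system of finite-difference equations relating the unknown heights to the measured slopes. The sketch below uses first-order differences for brevity (the paper's HFLI/EI-HFLI uses higher-order stencils) on a synthetic surface, solved with sparse LSQR.

        # Hedged sketch of least-squares gradient integration (first-order
        # stencil only, synthetic slopes): solve A z = b with sparse LSQR.
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import lsqr

        ny, nx, h = 32, 32, 1.0
        yy, xx = np.mgrid[0:ny, 0:nx]
        z_true = np.sin(xx / 6.0) + np.cos(yy / 7.0)
        gx = np.gradient(z_true, h, axis=1)        # "measured" slopes
        gy = np.gradient(z_true, h, axis=0)

        def diff_op(n):                            # 1-D forward differences
            return sparse.diags([-1, 1], [0, 1], shape=(n - 1, n)) / h

        Dx = sparse.kron(sparse.eye(ny), diff_op(nx))   # d/dx, flattened grid
        Dy = sparse.kron(diff_op(ny), sparse.eye(nx))   # d/dy
        A = sparse.vstack([Dx, Dy])
        b = np.concatenate([(0.5 * (gx[:, :-1] + gx[:, 1:])).ravel(),
                            (0.5 * (gy[:-1, :] + gy[1:, :])).ravel()])

        z = lsqr(A, b)[0].reshape(ny, nx)          # defined up to a constant
        z += z_true.mean() - z.mean()
        print("rms error:", np.sqrt(np.mean((z - z_true) ** 2)))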

  7. Simultaneous evaluation of interrelated cross sections by generalized least-squares and related data file requirements

    SciTech Connect

    Poenitz, W.P.

    1984-10-25

    Though several cross sections have been designated as standards, they are not basic units and are interrelated by ratio measurements. Moreover, as interactions such as ⁶Li + n and ¹⁰B + n involve only two and three cross sections, respectively, total cross section data become useful for the evaluation process. The problem can be resolved by a simultaneous evaluation of the available absolute and shape data for cross sections, ratios, sums, and average cross sections by generalized least squares. A data file is required for such an evaluation which contains the originally measured quantities and their uncertainty components. Establishing such a file is a substantial task because data were frequently reported as absolute cross sections where ratios were measured, without sufficient information on which reference cross section and which normalization were utilized. Reporting of uncertainties is often missing or incomplete. The requirements for data reporting will be discussed.

  8. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    SciTech Connect

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  9. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  10. The partial least-squares regression analysis of impact factors of coordinate measuring machine dynamic error

    NASA Astrophysics Data System (ADS)

    Zhang, Mei; Fei, Yetai; Sheng, Li; Ma, Xiushui; Yang, Hong-tao

    2008-12-01

    The causes of coordinate measuring machine (CMM) dynamic error are complicated, and many factors influence the error, so it is hard to build an accurate model. To attain a model that not only avoids analyzing complex error sources and the interactions among them but also resolves the multicollinearity among the variables, this paper adopts Partial Least-Squares Regression (PLSR). The model takes the 3D coordinates (X, Y, Z) and the moving velocity as the independent variables and the CMM dynamic error value as the dependent variable. The experimental results show that the model can be easily explained; at the same time, the results show the magnitude and direction with which each independent variable influences the dependent variable.

  11. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.

    PubMed

    Choi, Sou-Cheng T; Saunders, Michael A

    2014-02-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
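
    For orientation, SciPy ships plain MINRES (not MINRES-QLP; the authors' FORTRAN 90 and MATLAB codes add the minimum-length behavior for singular systems). The usage sketch below applies it to a synthetic symmetric indefinite system.

        # Usage sketch: scipy.sparse.linalg.minres on a symmetric indefinite
        # system (nonsingular here, so plain MINRES suffices).
        import numpy as np
        from scipy.sparse.linalg import minres

        rng = np.random.default_rng(5)
        B = rng.normal(size=(100, 100))
        A = B + B.T                       # symmetric, generally indefinite
        b = rng.normal(size=100)

        x, info = minres(A, b)            # info == 0 signals convergence
        print(info, np.linalg.norm(A @ x - b))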

  12. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  13. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  14. Online soft sensor of humidity in PEM fuel cell based on dynamic partial least squares.

    PubMed

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923

  15. Online soft sensor of humidity in PEM fuel cell based on dynamic partial least squares.

    PubMed

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results.

  16. Bilinear least squares (BLLS) and molecular fluorescence in the quantification of the propranolol enantiomers.

    PubMed

    Valderrama, Patrícia; Poppi, Ronei Jesus

    2008-08-01

    The quantification of mixtures of (R)- and (S)-propranolol (PRO) in pure form and in pharmaceutical preparations is described. The methodology is based on chiral recognition of propranolol through formation of an inclusion complex with beta-cyclodextrin (CD), a chiral auxiliary, in the presence of 1-butanol. The excitation-emission fluorescence surface and second-order multivariate calibration methods such as bilinear least squares (BLLS) and parallel factor analysis (PARAFAC) were used in the model development. BLLS performed better than PARAFAC, presenting a relative mean error on the order of 3.5%, analytical sensitivities of 0.07% and 0.08%, detection limits of 0.23% and 0.28%, and quantification limits of 0.71% and 0.84% for the pure form and pharmaceutical preparations, respectively. These results indicate that the proposed methodology can be an alternative for the determination of propranolol enantiomeric composition.

  17. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, which performs the time series forecasting. In this study the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models using long-term observations of monthly river flow discharge. The standard statistical measures, root mean square error (RMSE) and correlation coefficient (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  18. A library least-squares approach for scatter correction in gamma-ray tomography

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
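
    A library least-squares decomposition of this general kind can be sketched with nonnegative least squares: the measured spectrum is modeled as a nonnegative mix of stored library spectra. The transmission and scatter shapes below are synthetic stand-ins, not the UoB tomograph's libraries.

        # Hedged sketch of a library least-squares (LLS) decomposition using
        # scipy's nonnegative least squares (synthetic library spectra).
        import numpy as np
        from scipy.optimize import nnls

        E = np.linspace(0, 1, 128)                 # energy axis (arbitrary units)
        transmission = np.exp(-0.5 * ((E - 0.66) / 0.03) ** 2)  # photopeak-like
        scatter = np.exp(-3.0 * E) * (E < 0.66)                 # broad continuum
        library = np.column_stack([transmission, scatter])

        true_frac = np.array([0.7, 0.3])
        noise = 0.01 * np.random.default_rng(6).normal(size=128)
        measured = library @ true_frac + noise

        frac, resid = nnls(library, measured)      # nonnegative component weights
        print("transmission/scatter fractions:", frac)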

  19. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.

    PubMed

    Choi, Sou-Cheng T; Saunders, Michael A

    2014-02-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP.

  20. First-order system least squares for the pure traction problem in planar linear elasticity

    SciTech Connect

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  1. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression

    SciTech Connect

    Mizukami, Wataru; Tew, David P.; Habershon, Scott

    2014-10-14

    We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations, we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root-mean-square deviation of 0.3%–0.5% up to 25 000 cm⁻¹ above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero-point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm⁻¹ and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm⁻¹, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm⁻¹.

  2. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression

    NASA Astrophysics Data System (ADS)

    Mizukami, Wataru; Habershon, Scott; Tew, David P.

    2014-10-01

    We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations, we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root-mean-square deviation of 0.3%–0.5% up to 25 000 cm⁻¹ above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero-point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm⁻¹ and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm⁻¹, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm⁻¹.
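
    The fitting strategy described above can be shown in a 1-D toy analogue: draw a deliberately redundant Gaussian basis and let LASSO-constrained least squares select a sparse subset. The Morse-like target, basis sizes, and regularization strength below are invented for illustration, not the malonaldehyde surface.

        # Hedged sketch of LASSO-constrained least squares on a redundant
        # Gaussian basis (toy 1-D potential, synthetic parameters).
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(7)
        r = np.linspace(0.5, 3.0, 200)                 # 1-D "inter-atomic" axis
        v_true = (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2 # Morse-like target

        centers = rng.uniform(0.5, 3.0, size=500)      # massively redundant basis
        widths = rng.uniform(0.05, 0.8, size=500)
        Phi = np.exp(-((r[:, None] - centers) / widths) ** 2)

        model = Lasso(alpha=1e-4, max_iter=50000).fit(Phi, v_true)
        print("active basis functions:", np.sum(model.coef_ != 0), "of 500")
        print("rms fit error:",
              np.sqrt(np.mean((model.predict(Phi) - v_true) ** 2)))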

  3. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to deriving the Earth's gravity field, with two main objectives. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the GeoForschungsZentrum (GFZ), the Center for Space Research (CSR), and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  4. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new methods depends
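
    The computational core of QRD-based RLS is the Givens row update: each new data equation is rotated into a triangular factor R and a rotated right-hand side d, and the current solution is read off by back-substitution. The sketch below shows that core on synthetic data; forgetting factors and the systolic mapping are omitted.

        # Hedged sketch of the Givens row update at the heart of QRD-RLS
        # (no exponential forgetting, no systolic mapping; toy data only).
        import numpy as np

        def givens_update(R, d, row, y):
            # Rotate the new equation (row, y) into R and d, one column at a time.
            for j in range(R.shape[0]):
                r = np.hypot(R[j, j], row[j])
                if r == 0.0:
                    continue
                c, s = R[j, j] / r, row[j] / r
                Rj = R[j, j:].copy()
                R[j, j:] = c * Rj + s * row[j:]
                row[j:] = -s * Rj + c * row[j:]     # row[j] is annihilated
                d[j], y = c * d[j] + s * y, -s * d[j] + c * y
            return R, d

        rng = np.random.default_rng(8)
        n = 3
        x_true = rng.normal(size=n)
        R, d = np.zeros((n, n)), np.zeros(n)
        for _ in range(100):
            a = rng.normal(size=n)
            y = a @ x_true + 0.01 * rng.normal()
            R, d = givens_update(R, d, a, y)

        x_hat = np.linalg.solve(R, d)   # back-substitution on triangular R
        print("estimate:", x_hat)
        print("truth:   ", x_true)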

  5. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  6. [A hyperspectral subpixel target detection method based on inverse least squares method].

    PubMed

    Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun

    2009-01-01

    In the present paper, an inverse least squares (ILS) method combined with Mahalanobis distance outlier detection is discussed for detecting subpixel targets in a hyperspectral image. Firstly, the inverse model relating the target spectrum to all the pixel spectra was established, with the accurate target spectrum obtained beforehand, and the SNV algorithm was employed to preprocess each original pixel spectrum separately. After the pretreatment, the regression coefficients of the ILS model were calculated with the partial least squares (PLS) algorithm. Each point in the regression coefficient vector corresponds to a pixel in the image. The Mahalanobis distance was calculated for each point in the regression coefficient vector. Because the Mahalanobis distance represents the extent to which samples deviate from the total population, points with a Mahalanobis distance larger than 3sigma were regarded as subpixel targets. In this algorithm, no other prior information such as a representative background spectrum or modeling of the background is required; only the target spectrum is needed. In addition, the detection result is insensitive to the complexity of the background. This method was applied to AVIRIS remote sensing data. For this simulation experiment, the AVIRIS remote sensing data were freely downloaded from the official NASA website, the spectrum of a ground object in the AVIRIS hyperspectral image was picked as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection result of the method described above was compared with that of the orthogonal subspace projection (OSP) method. The result shows that the performance of the ILS method is better than that of the traditional OSP method. The ROC (receiver operating characteristic) curve and SNR were calculated, which indicates that the ILS method possesses higher detection accuracy and less computing time than the OSP algorithm. PMID:19385196
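
    A rough sketch of this detection chain follows: regress the known target spectrum onto the pixel spectra (an inverse model fit by PLS), then flag pixels whose regression coefficient lies beyond 3 sigma, the one-dimensional analogue of the Mahalanobis-distance test. The cube below is synthetic random data with a target injected into one pixel, not AVIRIS imagery, and SNV preprocessing is omitted.

        # Hedged sketch of ILS-style subpixel detection on a synthetic cube.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(9)
        n_bands, n_pixels = 100, 400
        background = rng.normal(size=(n_bands, n_pixels))
        target = rng.normal(size=n_bands)
        background[:, 37] += 0.5 * target      # hide a subpixel target in pixel 37

        # Inverse model: target spectrum as a function of all pixel spectra.
        pls = PLSRegression(n_components=10).fit(background, target)
        coef = pls.coef_.ravel()               # one coefficient per pixel

        z = (coef - coef.mean()) / coef.std()  # 1-D Mahalanobis distance
        print("flagged pixels:", np.where(np.abs(z) > 3)[0])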

  7. Crystallographic evaluation of internal motion of human alpha-lactalbumin refined by full-matrix least-squares method.

    PubMed

    Harata, K; Abe, Y; Muraki, M

    1999-03-26

    The low-temperature form of human alpha-lactalbumin (HAL) was crystallized from a ²H₂O solution and its structure was refined to an R value of 0.119 at 1.15 Å resolution by the full-matrix least-squares method. Average estimated standard deviations of atomic parameters for non-hydrogen atoms were 0.038 Å for coordinates and 0.044 Ų for anisotropic temperature factors (Uij). The magnitude of the equivalent isotropic temperature factors (Ueqv) was highly correlated with the distance from the molecular centroid and fitted to a quadratic equation as a function of atomic coordinates. The atomic thermal motion was rather isotropic in the core region and the anisotropy increased towards the molecular surface. The statistical analysis revealed the out-of-plane motion of main-chain oxygen atoms, indicating that peptide groups are in rotational vibration around a Cα-Cα axis. The TLS model, which describes the rigid-body motion in terms of translation, libration, and screw motions, was adopted for the evaluation of the molecular motion, and the TLS parameters were determined by a least-squares fit to Uij. The Ueqv(calc) reproduced from the TLS parameters was in fair agreement with the observed Ueqv, but differences were found in the regions of residues 5-22, 44-48, 70-75, and 121-123, where Ueqv was larger than Ueqv(calc) because of large local motions. To evaluate the internal motion of HAL, the contribution of the rigid-body motion was determined to be 42.4% of Ueqv in magnitude, which was the highest estimate satisfying the condition that the Uij(int) tensors of the internal motion have positive eigenvalues. The internal motion represented with atomic thermal ellipsoids clearly showed local motions different from those observed in chicken-type lysozymes, which have a backbone structure very similar to HAL. The result indicates that the internal motion is closely related to the biological function of proteins. PMID:10080897

  8. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    PubMed Central

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-01-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experimental results show: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score, and anogenital distance; 2) the prenatally PCB-exposed group was at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation results demonstrated the feasibility of applying the mathematical model in the environmental toxicology field. PMID:27118260

  9. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    NASA Astrophysics Data System (ADS)

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-04-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experimental results show: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score, and anogenital distance; 2) the prenatally PCB-exposed group was at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation results demonstrated the feasibility of applying the mathematical model in the environmental toxicology field.

  10. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model.

    PubMed

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-01-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experimental results show: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score, and anogenital distance; 2) the prenatally PCB-exposed group was at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation results demonstrated the feasibility of applying the mathematical model in the environmental toxicology field. PMID:27118260

  11. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

    SciTech Connect

    CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.

    1999-10-01

    The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements was present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
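
    The window-pooling idea can be sketched in miniature: run a classical least-squares fit of pure-component spectra in each window, then pool the per-window concentration estimates with inverse-variance weights. The two-element spectra and window sizes below are synthetic, not ICP-AES calibration data.

        # Hedged sketch of multi-window CLS with inverse-variance pooling.
        import numpy as np

        rng = np.random.default_rng(10)
        n_chan, n_win, n_elem = 50, 6, 2
        windows = [slice(i * n_chan, (i + 1) * n_chan) for i in range(n_win)]
        K = np.abs(rng.normal(size=(n_win * n_chan, n_elem)))  # pure spectra
        c_true = np.array([0.8, 0.3])
        y = K @ c_true + 0.02 * rng.normal(size=n_win * n_chan)

        est, var = [], []
        for w in windows:
            Kw = K[w]
            c, res, *_ = np.linalg.lstsq(Kw, y[w], rcond=None)
            s2 = res[0] / (n_chan - n_elem)          # residual variance
            cov = s2 * np.linalg.inv(Kw.T @ Kw)      # coefficient covariance
            est.append(c)
            var.append(np.diag(cov))
        est, var = np.array(est), np.array(var)

        # Statistically efficient pooling: inverse-variance weighted mean.
        pooled = (est / var).sum(axis=0) / (1.0 / var).sum(axis=0)
        print("pooled concentrations:", pooled)      # ~[0.8, 0.3]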

  12. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct for unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of some nonlinear relation between spectra and components, OSC-RBF-PLS gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful in removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercial injection product of penicillin G salts.

  13. Correspondence and Least Squares Analyses of Soil and Rock Compositions for the Viking Lander 1 and Pathfinder Sites

    NASA Technical Reports Server (NTRS)

    Larsen, K. W.; Arvidson, R. E.; Jolliff, B. L.; Clark, B. C.

    2000-01-01

    Correspondence and Least Squares Mixing Analysis techniques are applied to the chemical composition of Viking 1 soils and Pathfinder rocks and soils. Implications for the parent composition of local and global materials are discussed.

  14. Application of Partial Least Squares (PLS) Regression to Determine Landscape-Scale Aquatic Resource Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology to study the associations among constituents of surface water and landscapes. Common data problems in ecological studies include: s...

  15. Application of Partial Least Square (PLS) Regression to Determine Landscape-Scale Aquatic Resources Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology, particularly for determining the associations among multiple constituents of surface water and landscape configuration. Common dat...

  16. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-01-01

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source. PMID:24977387
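
    The abstract does not give the update rule, so the toy below only illustrates the convex-combination (consensus) iteration such schemes build on: every node repeatedly replaces its local least-squares source estimate with a convex combination of its neighbours' estimates. The ring topology, weight matrix and local estimates are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 12                                        # sensor nodes
      local = rng.normal([5.0, 3.0], 1.0, (n, 2))   # local LS estimates of the source

      W = np.zeros((n, n))                          # ring topology, rows sum to 1
      for i in range(n):
          W[i, i] = 0.5
          W[i, (i - 1) % n] = 0.25
          W[i, (i + 1) % n] = 0.25

      x = local.copy()
      for _ in range(200):                          # distributed iterations
          x = W @ x                                 # convex combination step
      print(x.mean(axis=0), x.std(axis=0))          # nodes agree on one location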

  17. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, and the data were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K1) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BP1 and BP2), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
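
    A minimal numpy implementation of the standard fuzzy C-means updates (Bezdek's alternating scheme) is sketched below; the coupling to GLLS and the neighbourhood-modified variant are omitted, and the feature vectors (e.g. voxel time-activity curves) are assumptions.

      import numpy as np

      def fcm(X, c=3, m=2.0, iters=100, seed=0):
          """Fuzzy C-means: alternate membership and centroid updates.
          X: (n, d) features, e.g. voxel time-activity curves.
          Returns memberships U (n, c) and cluster centres (c, d)."""
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(c), size=len(X))    # random start
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
              U = d ** (-2.0 / (m - 1.0))               # u_ik prop. to d_ik^(-2/(m-1))
              U /= U.sum(axis=1, keepdims=True)
          return U, centers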

  18. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

    PubMed Central

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-01-01

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source. PMID:24977387

  19. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  20. Multi-phase classification by a least-squares support vector machine approach in tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, Faisal; Enzmann, Frieder; Kersten, Michael

    2016-03-01

    Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data, or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM, an algorithm for pixel-based multi-phase classification) approach. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was thus used to successfully classify three multi-phase rock core samples of varying complexity.
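
    The beam-hardening step lends itself to a compact least-squares sketch: fit a quadratic surface in the pixel coordinates to a reconstructed slice and keep the residual as the BH-corrected image. This numpy version mirrors the idea of the paper's Matlab code but is an independent reconstruction, not that code.

      import numpy as np

      def bh_correct(img):
          """Fit a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 to the
          slice by least squares; return grey values minus the fitted
          beam-hardening trend (the residual image)."""
          ny, nx = img.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          x, y, z = xx.ravel(), yy.ravel(), img.ravel().astype(float)
          A = np.column_stack([np.ones_like(x), x, y,
                               x**2, x*y, y**2]).astype(float)
          coef, *_ = np.linalg.lstsq(A, z, rcond=None)
          return (z - A @ coef).reshape(ny, nx)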

  1. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching

    PubMed Central

    2012-01-01

    Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
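
    The core fitting step can be sketched with scipy's non-negative least squares: columns of a template matrix hold candidate isotope patterns, NNLS returns non-negative amplitudes, and thresholding the amplitudes performs the feature selection the abstract mentions. The templates and threshold here are random stand-ins, not the IPPD implementation.

      import numpy as np
      from scipy.optimize import nnls

      # T: each column is an isotope-pattern template at one candidate
      # m/z position; s: the raw spectrum (both hypothetical).
      rng = np.random.default_rng(2)
      T = np.abs(rng.normal(size=(500, 40)))
      s = T @ np.maximum(rng.normal(size=40), 0.0) + 0.01 * rng.random(500)

      beta, rnorm = nnls(T, s)                 # non-negative amplitudes
      picked = np.flatnonzero(beta > 0.1)      # thresholding = feature selection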

  2. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived for the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
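
    For orientation, the first-order velocity-pressure-vorticity system for the Stokes problem and the least-squares functional it induces are commonly written as follows (a sketch with unit viscosity; the paper's exact notation and norms may differ):

      \begin{aligned}
      &\nabla\times\boldsymbol{\omega} + \nabla p = \mathbf{f},\qquad
       \nabla\cdot\mathbf{u} = 0,\qquad
       \boldsymbol{\omega} - \nabla\times\mathbf{u} = \mathbf{0}
       \quad\text{in } \Omega,\\
      &J(\mathbf{u},p,\boldsymbol{\omega}) =
       \|\nabla\times\boldsymbol{\omega} + \nabla p - \mathbf{f}\|_0^2
       + \|\nabla\cdot\mathbf{u}\|_0^2
       + \|\boldsymbol{\omega} - \nabla\times\mathbf{u}\|_0^2,
      \end{aligned}

    and the least-squares finite element solution minimizes J over equal-order conforming spaces for velocity, pressure and vorticity.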

  3. Realizations and performances of least-squares estimation and Kalman filtering by systolic arrays

    SciTech Connect

    Chen, M.J.

    1987-01-01

    Fast least-squares (LS) estimation and Kalman-filtering algorithms utilizing systolic-array implementation are studied. Based on a generalized systolic QR algorithm, a modified LS method is proposed and shown to have superior computational and inter-cell connection complexities, and to be more practical for systolic-array implementation. After whitening processing, the Kalman filter can be formulated as a SRIF data-processing problem followed by a simple LS operation. This approach simplifies the computational structure, and is more reliable when the system has a singular or near-singular coefficient matrix. To improve the throughput rate of the systolic Kalman filter, a topology for stripe QR processing is also proposed. By skewing the order of the input matrices, a fully pipelined systolic Kalman-filtering operation can be achieved. With O(n^2) processing units, the system throughput rate becomes O(n). The numerical properties of the systolic LS estimation and Kalman filtering algorithms under finite word-length effects are studied via analysis and computer simulations, and are compared with those of conventional approaches. Fault tolerance of the LS estimation algorithm is also discussed. It is shown that by using a simple bypass register, reasonable estimation performance is still possible with a transiently defective processing unit.

  4. Partial Least Square Discriminant Analysis Discovered a Dietary Pattern Inversely Associated with Nasopharyngeal Carcinoma Risk

    PubMed Central

    Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen

    2016-01-01

    Evidence on the association between dietary component, dietary pattern and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case–control study, data from a total of 319 incident cases and 319 matched controls were analyzed. Dietary pattern was derived employing partial least square discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score of the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs ordered by loading values. We observed that one unit increase in the scores was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60–0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC protective diet is indicated with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimen for NPC prevention. PMID:27249558

  5. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
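
    A hedged sketch of the workflow: PLS regression maps environmental covariates to (log) FIB concentration, and a tunable decision threshold on the predictions separates predicted exceedances from non-exceedances. The covariates, the toy response and the threshold value are assumptions; the 235 CFU/100 mL action value is given only as an example.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 15))        # turbidity, rainfall, waves, ... (toy)
      y = X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=120)   # log10 FIB (toy)

      model = PLSRegression(n_components=4).fit(X[:80], y[:80])
      pred = model.predict(X[80:]).ravel()

      standard = np.log10(235.0)            # e.g. 235 CFU/100 mL action value
      tau = 2.0                             # tuned decision threshold
      advisory = pred > tau                 # predicted exceedances
      actual = y[80:] > standard            # for counting hits and misses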

  6. Automatic retinal vessel classification using a Least Square-Support Vector Machine in VAMPIRE.

    PubMed

    Relan, D; MacGillivray, T; Ballerini, L; Trucco, E

    2014-01-01

    It is important to classify retinal blood vessels into arterioles and venules for computerised analysis of the vasculature and to aid discovery of disease biomarkers. For instance, zone B is the standardised region of a retinal image utilised for the measurement of the arteriole to venule width ratio (AVR), a parameter indicative of microvascular health and systemic disease. We introduce a Least Square-Support Vector Machine (LS-SVM) classifier for the first time (to the best of our knowledge) to automatically label arterioles and venules. We use only 4 image features and consider vessels inside zone B (802 vessels from 70 fundus camera images) and in an extended zone (1,207 vessels, 70 fundus camera images). We achieve an accuracy of 94.88% and 93.96% in zone B and the extended zone, respectively, with a training set of 10 images and a testing set of 60 images. With a smaller training set of only 5 images and the same testing set we achieve an accuracy of 94.16% and 93.95%, respectively. This experiment was repeated five times by randomly choosing 10 and 5 images for the training set. Mean classification accuracies are close to the above-mentioned results. We conclude that the performance of our system is very promising and outperforms most recently reported systems.
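
    LS-SVM training reduces to one linear solve, which is why it suits small training sets; the sketch below uses the function-estimation form applied to +/-1 labels with an RBF kernel (a Suykens-style formulation). The four vessel features and all parameter values are placeholders, not those of the VAMPIRE system.

      import numpy as np

      def lssvm_train(X, y, gamma=10.0, sigma=1.0):
          """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
          d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
          K = np.exp(-d2 / (2 * sigma**2))
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
          return sol[0], sol[1:]            # bias b, multipliers alpha

      def lssvm_predict(X, Xtr, b, alpha, sigma=1.0):
          d2 = ((X[:, None] - Xtr[None]) ** 2).sum(-1)
          return np.sign(np.exp(-d2 / (2 * sigma**2)) @ alpha + b)

      rng = np.random.default_rng(8)
      Xtr = rng.normal(size=(60, 4))        # 4 features per vessel (toy)
      ytr = np.sign(Xtr[:, 0] + 0.1 * rng.normal(size=60))
      b, alpha = lssvm_train(Xtr, ytr)
      acc = (lssvm_predict(Xtr, Xtr, b, alpha) == ytr).mean()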

  7. Multidimensional model of apathy in older adults using partial least squares--path modeling.

    PubMed

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these different factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed.

  8. Positioning Based on Integration of Muti-Sensor Systems Using Kalman Filter and Least Square Adjustment

    NASA Astrophysics Data System (ADS)

    Omidalizarandi, M.; Cao, Z.

    2013-09-01

    Sensor fusion combines data from different sensors in order to build a more accurate model. In this research, different sensors (Optical Speed Sensor, Bosch Sensor, Odometer, XSENS, Silicon and GPS receiver) have been utilized to obtain different kinds of datasets, to implement the multi-sensor system and to compare the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of a van. The van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research has been developed in the LabVIEW software environment, and data are transferred to the PC via an A/D converter (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor are determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system, with its own location on the object, definition of coordinate axes, and dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) have been examined, and the results of the different approaches are compared.
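
    To make the two estimation strategies concrete, the toy below runs a one-dimensional constant-velocity Kalman filter on noisy position readings; the batch least-squares alternative would fit position = p0 + v*t to the whole record in one solve. The model matrices and noise levels are illustrative assumptions, not the project's actual sensor models.

      import numpy as np

      dt = 0.1
      F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
      H = np.array([[1.0, 0.0]])              # position is measured
      Q = 1e-3 * np.eye(2)                    # process noise
      R = np.array([[0.25]])                  # measurement noise

      rng = np.random.default_rng(4)
      x, P = np.zeros(2), np.eye(2)
      for k in range(100):
          z = 2.0 * dt * k + rng.normal(0.0, 0.5)   # noisy position reading
          x, P = F @ x, F @ P @ F.T + Q             # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
          x = x + K @ (z - H @ x)                   # update state
          P = (np.eye(2) - K @ H) @ P               # update covariance
      print(x)   # state [position, velocity]; velocity settles near 2.0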

  9. Least-squares finite element solution of 3D incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, Tsung-Liang; Povinelli, Louis A.

    1992-01-01

    Although significant progress has been made in the finite element solution of incompressible viscous flow problems, development of more efficient methods is still needed before large-scale computation of 3D problems becomes feasible. This paper presents such a development. The most popular finite element method for the solution of the incompressible Navier-Stokes equations is the classic Galerkin mixed method based on the velocity-pressure formulation. The mixed method requires the use of different elements to interpolate the velocity and the pressure in order to satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) condition for the existence of the solution. On the other hand, due to the lack of symmetry and positive definiteness of the linear equations arising from the mixed method, iterative methods for the solution of the linear systems have been hard to come by. Therefore, direct Gaussian elimination has been considered the only viable method for solving the systems. But, for three-dimensional problems, the computer resources required by a direct method become prohibitively large. In order to overcome these difficulties, a least-squares finite element method (LSFEM) has been developed. This method is based on the first-order velocity-pressure-vorticity formulation. In this paper the LSFEM is extended to the solution of the three-dimensional incompressible Navier-Stokes equations written in a first-order quasi-linear velocity-pressure-vorticity formulation.

  10. TDRSS-user orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.

  11. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.

  12. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  13. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven-cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.

  14. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.

  15. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

    SciTech Connect

    Paulsen, Jeffrey L; Franck, John; Demas, Vasiliki; Bouchard, Louis-S.

    2008-03-27

    Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16,100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32,200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable "sensitive volumes."

  16. Quantitative analysis of mixed hydrofluoric and nitric acids using Raman spectroscopy with partial least squares regression.

    PubMed

    Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk

    2010-06-15

    Mixed hydrofluoric and nitric acids are widely used as a good etchant for the pickling process of stainless steels. The cost reduction and the procedure optimization in the manufacturing process can be facilitated by optically detecting the concentration of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of hydrofluoric acid (HF) and nitric acid (HNO3) mixture samples with high accuracy. The experiments were carried out for the mixed acids which consist of HF (0.5-3 wt%) and HNO3 (2-12 wt%) at room temperature. Fourier Transform Raman spectroscopy has been utilized to measure the concentrations of the mixed acids HF and HNO3, because the mixture sample has several strong Raman bands caused by the vibrational mode of each acid in this spectrum. The calibration of spectral data has been performed using the partial least squares regression method, which is ideal for local range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate the performance of our methodology.

  17. The LASSO and sparse least square regression methods for SNP selection in predicting quantitative traits.

    PubMed

    Feng, Zeny Z; Yang, Xiaojian; Subedi, Sanjeena; McNicholas, Paul D

    2012-01-01

    Recent work concerning quantitative traits of interest has focused on selecting a small subset of single nucleotide polymorphisms (SNPs) from amongst the SNPs responsible for the phenotypic variation of the trait. When considered as covariates, the large number of variables (SNPs) and their association with those in close proximity pose challenges for variable selection. The features of sparsity and shrinkage of regression coefficients of the least absolute shrinkage and selection operator (LASSO) method appear attractive for SNP selection. Sparse partial least squares (SPLS) is also appealing as it combines the features of sparsity in subset selection and dimension reduction to handle correlations amongst SNPs. In this paper we investigate application of the LASSO and SPLS methods for selecting SNPs that predict quantitative traits. We evaluate the performance of both methods with different criteria and under different scenarios using simulation studies. Results indicate that these methods can be effective in selecting SNPs that predict quantitative traits but are limited by some conditions. Both methods perform similarly overall but each exhibit advantages over the other in given situations. Both methods are applied to Canadian Holstein cattle data to compare their performance.

  18. Entropy and generalized least square methods in assessment of the regional value of streamgages

    USGS Publications Warehouse

    Markus, M.; Vernon, Knapp H.; Tasker, Gary D.

    2003-01-01

    The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations.

  19. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model and the overall computational complexity associated with tuning hyperparameters are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.

  20. Objective chemical fingerprinting of oil spills by partial least-squares discriminant analysis.

    PubMed

    Gómez-Carracedo, M P; Ferré, J; Andrade, J M; Fernández-Varela, R; Boqué, R

    2012-06-01

    An objective method based on partial least-squares discriminant analysis (PLS-DA) was used to assign an oil lump collected on the coastline to a suspected source. The approach is an add-on to current US and European oil fingerprinting standard procedures that are based on lengthy and rather subjective visual comparison of chromatograms. The procedure required an initial variable selection step using the selectivity ratio index (SRI) followed by a PLS-DA model. From the model, a "matching decision diagram" was established that yielded the four possible decisions that may arise from standard procedures (i.e., match, non-match, probable match, and inconclusive). The decision diagram included two limits, one derived from the Q-residuals of the samples of the target class and the other derived from the predicted y of the PLS model. The method was used to classify 45 oil lumps collected on the Galician coast after the Prestige wreckage. The results compared satisfactorily with those from the standard methods.

  1. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to the field uniformity requirement. This is because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach however, is generally slow and also has difficulty balancing the field quality and the total amount of steel for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach.
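
    The trade-off in the abstract, field fit versus total steel mass, is the classic L1-regularized least-squares problem, which the sketch below expresses with scikit-learn's Lasso (positive coefficients standing in for non-negative shim masses). The sensitivity matrix A and field data are random placeholders, not a magnet model.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      A = rng.normal(size=(300, 120))   # field per unit shim mass (hypothetical)
      b = rng.normal(size=300)          # measured B0 inhomogeneity to cancel

      # minimize ||A m + b||^2 + lam * sum(m)  with  m >= 0
      lasso = Lasso(alpha=1e-2, positive=True, fit_intercept=False)
      m = lasso.fit(A, -b).coef_        # shim masses; many are exactly zero
      residual = A @ m + b              # remaining field error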

  2. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in the observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous gravity records. This requires an accurate tidal model and possibly the atmospheric pressure at the observation site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours without the necessity of any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.
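
    The collocation prediction itself is compact: with a chosen covariance function, the signal in the gap is s_hat = C_pf (C_ff + D)^-1 l. The sketch below assumes a Gaussian covariance and toy residuals; the Bayesian tuning of covariance parameters and the outlier rejection described above are omitted.

      import numpy as np

      def lsc_fill(t_obs, g_obs, t_gap, L=30.0, var=1.0, noise=0.01):
          """Least-squares collocation prediction of gap values from
          observed epochs; C(dt) = var * exp(-(dt/L)^2) is an assumed
          covariance model, D = noise * I the observation noise."""
          C = lambda a, b: var * np.exp(-((a[:, None] - b[None]) / L) ** 2)
          Cff = C(t_obs, t_obs) + noise * np.eye(len(t_obs))
          return C(t_gap, t_obs) @ np.linalg.solve(Cff, g_obs)

      t = np.arange(0.0, 600.0)                     # minutes
      keep = (t < 200) | (t > 320)                  # a two-hour gap
      g = np.sin(2 * np.pi * t / 180.0)             # detrended residuals (toy)
      filled = lsc_fill(t[keep], g[keep], t[~keep])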

  3. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage.

  4. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm.

    PubMed

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to the field uniformity requirement. This is because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach however, is generally slow and also has difficulty balancing the field quality and the total amount of steel for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach. PMID:26784397

  5. [Band depth analysis and partial least square regression based winter wheat biomass estimation using hyperspectral measurements].

    PubMed

    Fu, Yuan-Yuan; Wang, Ji-Hua; Yang, Gui-Jun; Song, Xiao-Yu; Xu, Xin-Gang; Feng, Hai-Kuan

    2013-05-01

    The major limitation of using existing vegetation indices for crop biomass estimation is that they asymptotically approach a saturation level over a certain range of biomass. In order to resolve this problem, band depth analysis and partial least squares regression (PLSR) were combined to establish a winter wheat biomass estimation model in the present study. The models based on the combination of band depth analysis and PLSR were subsequently compared with models based on common vegetation indices in terms of estimation accuracy. Band depth analysis was conducted in the visible spectral domain (550-750 nm). Band depth, band depth ratio (BDR), normalized band depth index, and band depth normalized to area were utilized to represent band depth information. Among the calibrated estimation models, those based on the combination of band depth analysis and PLSR reached higher accuracy than those based on the vegetation indices. Among them, the combination of BDR and PLSR achieved the highest accuracy (R2 = 0.792, RMSE = 0.164 kg m^-2). The results indicated that the combination of band depth analysis and PLSR could well overcome the saturation problem and improve the biomass estimation accuracy when winter wheat biomass is large.

  6. QSAR and QSPR model interpretation using partial least squares (PLS) analysis.

    PubMed

    Stanton, David T

    2012-06-01

    Carefully developed quantitative structure-activity and structure-property relationship models contain detailed information regarding how differences in the molecular structure of compounds correlate with differences in the observed biological or other physicochemical properties of those compounds. The ability to understand the behavior of existing molecules and to design new molecules is facilitated by using an objective method to extract and explain the details of the underlying structure-activity or structure-property relationship. Furthermore, a clear understanding of how and why compounds behave as they do can lead to new innovations through model-directed selection of compounds to be used in complex mixtures such as laundry detergents, fabric softeners, and shampoos. Such a method has been developed based on partial least-squares (PLS) regression analysis that allows for the identification of specific structural trends that relate to differences in observed properties. But the analysis of the completed model is only the last step of the process. The model development process itself affects the ability to extract a clear interpretation of the model. Everything from the selection of the initial pool of molecular descriptors to evaluate, to data set and model optimization, impacts the ability to derive detailed molecular design information. This review describes the method details and examples of the use of PLS for model interpretation, and also outlines suggestions regarding model development and model and data set optimization that enable the interpretation process. PMID:22497466

  7. Multidimensional model of apathy in older adults using partial least squares--path modeling.

    PubMed

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these different factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed. PMID:27153818

  8. IUPAC-consistent approach to the limit of detection in partial least-squares calibration.

    PubMed

    Allegrini, Franco; Olivieri, Alejandro C

    2014-08-01

    There is currently no well-defined procedure for providing the limit of detection (LOD) in multivariate calibration. Defining an estimator for the LOD in this scenario has proven to be more complex than intuitively extending the traditional univariate definition. For these reasons, although many attempts have been made to arrive at a reasonable convention, additional effort is required to achieve full agreement between the univariate and multivariate LOD definitions. In this work, a novel approach is presented to estimate the LOD in partial least-squares (PLS) calibration. Instead of a single LOD value, an interval of LODs is provided, which depends on the variation of the background composition in the calibration space. This is in contrast with previously proposed univariate extensions of the LOD concept. With the present definition, the LOD interval becomes a parameter characterizing the overall PLS calibration model, and not each test sample in particular, as has been proposed in the past. The new approach takes into account IUPAC official recommendations, and also the latest developments in error-in-variables theory for PLS calibration. Both simulated and real analytical systems have been studied for illustrating the properties of the new LOD concept. PMID:25008998

  9. Least squares twin support vector machine with Universum data for classification

    NASA Astrophysics Data System (ADS)

    Xu, Yitian; Chen, Mei; Li, Guohui

    2016-11-01

    Universum, a third class not belonging to either class of the classification problem, allows prior knowledge to be incorporated into the learning process. A lot of previous work has demonstrated that the Universum is helpful for supervised and semi-supervised classification. Moreover, Universum has already been introduced into the support vector machine (SVM) and the twin support vector machine (TSVM) to enhance generalisation performance. To further increase the generalisation performance, we propose a least squares TSVM with Universum data (ULS-TSVM) in this paper. Our ULS-TSVM possesses the following advantages: first, it exploits Universum data to improve generalisation performance. Besides, it implements the structural risk minimisation principle by adding a regularisation term to the objective function. Finally, it costs less computing time by solving two small-sized systems of linear equations instead of a single larger-sized quadratic programming problem. To verify the validity of our proposed algorithm, we conduct various experiments varying the size of the labelled samples and the number of Universum data on data-sets including seven benchmark data-sets, Toy data, MNIST and Face images. Empirical experiments indicate that Universum contributes to improved and even more stable prediction accuracy. Especially when fewer labelled samples are given, ULS-TSVM is far superior to the improved LS-TSVM (ILS-TSVM), and slightly superior to the U-TSVM.

  10. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the west coast of the UK. The amplitudes of the tidal constituents at these two tidal stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington can provide reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum proposed by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among those expected in this time span, plus some frequencies new to the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
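
    Because LS-HE only needs a design matrix of trial harmonics, it copes with unevenly spaced records where the FFT does not. The sketch below scans trial frequencies, fits a cosine/sine pair by least squares at each, and records the explained variance; the sampling pattern and the single M2-like constituent are toy assumptions.

      import numpy as np

      def lshe_power(t, y, omegas):
          """Least-squares spectrum for (possibly unevenly spaced) data:
          for each trial frequency fit y ~ a*cos(wt) + b*sin(wt) and
          record the fraction of variance explained."""
          y = y - y.mean()
          power = []
          for w in omegas:
              A = np.column_stack([np.cos(w * t), np.sin(w * t)])
              coef, *_ = np.linalg.lstsq(A, y, rcond=None)
              r = y - A @ coef
              power.append(1.0 - (r @ r) / (y @ y))
          return np.array(power)

      # Irregular hourly sea levels; M2 tide ~ 1.9323 cycles/day (toy signal).
      rng = np.random.default_rng(6)
      t = np.sort(rng.choice(24 * 365, 6000, False)) / 24.0    # days
      y = 0.8 * np.cos(2 * np.pi * 1.9323 * t + 0.3)
      freqs = np.linspace(0.8, 2.2, 1500)                      # cycles/day
      p = lshe_power(t, y, 2 * np.pi * freqs)
      print(freqs[p.argmax()])                                 # ~1.932 (M2)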

  11. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    NASA Astrophysics Data System (ADS)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived by either seismic observations, gravity measurements or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information, in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved that this method can resolve the gravity inversion problem even with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data shall be presented.

  12. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    PubMed

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-12-15

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy, because the physical and chemical properties of the measured object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the root mean square error of prediction (RMSEP) was reduced by 38.6% compared to conventional PLS using wavelengths selected on the basis of variable importance in projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843
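    To make the idea concrete, here is a minimal Python sketch of locally weighted PLS (not the authors' code): a query-dependent similarity weights the calibration samples, the data are weighted-mean centred, and a PLS1 model is built on sqrt-weight-scaled rows, which is equivalent to weighted PLS. The exponential Euclidean similarity and all parameter values are assumptions; the paper defines its own similarity measure.

    ```python
    import numpy as np

    def lw_pls_predict(X, y, x_q, n_comp=3, theta=1.0):
        """Locally weighted PLS1 prediction for one query spectrum x_q."""
        d = np.linalg.norm(X - x_q, axis=1)
        w = np.exp(-d / (theta * d.std()))          # similarity weights
        xm = (w @ X) / w.sum()                      # weighted means
        ym = (w @ y) / w.sum()
        Xr = np.sqrt(w)[:, None] * (X - xm)         # row-scaled, centred data
        yr = np.sqrt(w) * (y - ym)

        W_, P_, q_ = [], [], []
        for _ in range(n_comp):                     # PLS1 by NIPALS deflation
            wvec = Xr.T @ yr
            wvec /= np.linalg.norm(wvec)
            t = Xr @ wvec
            p_load = Xr.T @ t / (t @ t)
            q = yr @ t / (t @ t)
            Xr = Xr - np.outer(t, p_load)
            yr = yr - q * t
            W_.append(wvec); P_.append(p_load); q_.append(q)
        W_, P_, q_ = np.array(W_).T, np.array(P_).T, np.array(q_)
        beta = W_ @ np.linalg.solve(P_.T @ W_, q_)  # regression vector
        return ym + (x_q - xm) @ beta
    ```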

  13. Least-squares reverse-time migration with cost-effective computation and memory storage

    NASA Astrophysics Data System (ADS)

    Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

    2016-06-01

    Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repetitively computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid point are stored. The RTM procedure of each subsequent iteration then involves only: (1) backward propagation of the residual between Born modeled and acquired data, and (2) application of the modified excitation-amplitude imaging condition, multiplying the stored maximum amplitude by the back-propagated data residual only at the grid points whose excitation time matches the current time step. For a complex model, two or three local peak amplitudes and the corresponding traveltimes should be identified and stored for all grid points so that multi-arrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method greatly reduces computation and memory costs.
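    The bookkeeping at the heart of the method can be sketched as follows; this is an illustrative array-based rendering under assumed interfaces (snapshot iterators from some external modelling code), not the authors' implementation.

    ```python
    import numpy as np

    def forward_store_excitation(source_wavefields):
        """One forward pass: keep only the peak amplitude and its time step.

        `source_wavefields` iterates over source-wavefield snapshots (2-D
        arrays, one per time step) from any forward modelling code.
        """
        amp_max, t_max = None, None
        for it, snap in enumerate(source_wavefields):
            if amp_max is None:
                amp_max = np.abs(snap).copy()
                t_max = np.zeros(snap.shape, dtype=int)
            mask = np.abs(snap) > amp_max
            amp_max[mask] = np.abs(snap)[mask]
            t_max[mask] = it
        return amp_max, t_max

    def excitation_imaging(amp_max, t_max, residual_wavefields):
        """Modified excitation-amplitude imaging condition.

        The image is updated only at grid points whose stored excitation
        time matches the time step of the current residual snapshot.
        """
        image = np.zeros_like(amp_max)
        for it, resid in enumerate(residual_wavefields):
            mask = (t_max == it)
            image[mask] += amp_max[mask] * resid[mask]
        return image
    ```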

  14. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on the recursive least squares (RLS) algorithm applied to a deterministic 'input-output' model. RLS has a Kalman filtering character and is able to identify the unknown parameters of a noisy dynamic system by exploiting the correlation properties of the variables. The identification of the suspension parameters is achieved by learning the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is presented as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of the strategy for real applications. The results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique can pave the way for a future vehicle condition monitoring system.
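    A minimal sketch of the underlying RLS recursion is shown below, using a forgetting factor (the paper's fixed-trace variant instead rescales the covariance at each step); the construction of the regressor vectors from vehicle excitation and response signals is omitted here.

    ```python
    import numpy as np

    def rls_identify(phi_seq, y_seq, lam=0.999, delta=1e3):
        """Recursive least squares identification of an input-output model.

        phi_seq: sequence of regressor vectors (excitations, past responses),
        y_seq:   corresponding measured outputs,
        lam:     forgetting factor (lam = 1 gives ordinary RLS).
        """
        n = len(phi_seq[0])
        theta = np.zeros(n)          # parameter estimates (e.g. stiffness, damping)
        P = delta * np.eye(n)        # covariance of the estimates
        for phi, y in zip(phi_seq, y_seq):
            phi = np.asarray(phi, dtype=float)
            k = P @ phi / (lam + phi @ P @ phi)    # Kalman-like gain
            theta = theta + k * (y - phi @ theta)  # innovation update
            P = (P - np.outer(k, phi @ P)) / lam   # covariance update
        return theta
    ```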

  15. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    PubMed

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence involving various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex can be described effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space for each subject to determine how well a combination of the various cortical measures explains human intelligence. Since the cortical measures are not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the measures. Our results showed that 30% of the variance in FSIQ was explained by the first latent variable extracted by the PLS regression analysis. Although it is difficult to relate the first latent variable to specific anatomy, we found that the cortical thickness measures had a substantial impact on the PLS model, making them the most significant factor accounting for FSIQ. Our results strongly suggest that a new predictor combining different morphometric properties of the complex cortical structure is well suited for predicting human intelligence. PMID:23643979
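    As a minimal illustration of why PLS suits such inter-related predictors, the following Python sketch extracts a first latent variable from four collinear measures. The data are synthetic stand-ins, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Four highly inter-related "cortical" measures sharing one latent factor,
    # predicting a scalar score: the multicollinearity that motivates PLS.
    rng = np.random.default_rng(0)
    latent = rng.standard_normal((78, 1))                   # shared structure
    X = np.hstack([latent + 0.1 * rng.standard_normal((78, 1)) for _ in range(4)])
    y = 0.8 * latent[:, 0] + 0.3 * rng.standard_normal(78)  # synthetic score

    pls = PLSRegression(n_components=1)                     # first latent variable
    pls.fit(X, y)
    print(pls.score(X, y))        # R^2 explained by the first latent variable
    print(pls.x_weights_[:, 0])   # contribution of each measure to the LV
    ```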

  16. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve, the curve customarily comprising a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed: the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles is estimated using data of a longer time span, and the remaining parameters are then estimated using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not in every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that using forecasts of the AEF alone does not lead to any apparent improvement or degradation, while using forecasts of AEF + OEF does lead to apparent improvement.
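    A minimal sketch of the core technique, fixing the last observed point on an LS extrapolation curve, is given below; the period values (annual and approximate Chandler periods) and the use of a KKT system for the equality constraint are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def ls_extrapolate_fixed_end(t, y, t_pred, periods=(365.25, 433.0), deg=1):
        """LS extrapolation with a polynomial plus sinusoids, constrained to
        pass exactly through the last observed data point."""
        def design(tt):
            cols = [tt ** k for k in range(deg + 1)]
            for P in periods:
                cols += [np.cos(2 * np.pi * tt / P), np.sin(2 * np.pi * tt / P)]
            return np.column_stack(cols)

        A = design(np.asarray(t))
        a_end = design(np.array([t[-1]]))[0]
        n = A.shape[1]
        # Equality-constrained least squares via the KKT system:
        # minimize ||A c - y||^2  subject to  a_end . c = y[-1]
        K = np.zeros((n + 1, n + 1))
        K[:n, :n], K[:n, n], K[n, :n] = 2 * A.T @ A, a_end, a_end
        rhs = np.concatenate([2 * A.T @ y, [y[-1]]])
        coef = np.linalg.solve(K, rhs)[:n]
        return design(np.asarray(t_pred)) @ coef
    ```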

  17. First-order system least-squares for the Helmholtz equation

    SciTech Connect

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation $\Delta p + k^2 p = 0$. Several least-squares functionals, some of which include both $H^{-1}(\Omega)$ and $L^2(\Omega)$ terms, are examined. We show that in a special subspace of $[H(\mathrm{div};\Omega) \cap H(\mathrm{curl};\Omega)] \times H^1(\Omega)$, each of these functionals is equivalent, independently of $k$, to a scaled $H^1(\Omega)$ norm of $p$ and $\mathbf{u} = \nabla p$. This special subspace does not include the oscillatory near-nullspace components $c\,e^{ik(\alpha x + \beta y)}$, where $c$ is a complex vector and $\alpha^2 + \beta^2 = 1$. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair $(\alpha, \beta)$ and which approximate $c\,e^{ik(\alpha x + \beta y)}$ well on the coarser levels, where bilinears cannot. We use several pairs of these parameters on each of the coarser levels, so that several coarse-grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.
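    For orientation, the snippet below sketches one representative first-order reformulation and least-squares functional for the Helmholtz equation; this is an illustrative form only, and the functionals examined in the abstract (including those with $H^{-1}$ terms) differ in their details.

    ```latex
    % Illustrative first-order system and FOSLS functional (assumed form):
    \begin{aligned}
      \mathbf{u} &= \nabla p, \qquad \nabla\cdot\mathbf{u} + k^2 p = 0, \\
      F(\mathbf{u}, p) &= \|\nabla\cdot\mathbf{u} + k^2 p\|_{0,\Omega}^2
                        + \|\nabla\times\mathbf{u}\|_{0,\Omega}^2
                        + \|\mathbf{u} - \nabla p\|_{0,\Omega}^2 .
    \end{aligned}
    ```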

  18. Weighted Least-Squares Finite Element Method for Cardiac Blood Flow Simulation with Echocardiographic Data

    PubMed Central

    Wei, Fei; Westerdale, John; McMahon, Eileen M.; Belohlavek, Marek; Heys, Jeffrey J.

    2012-01-01

    As both fluid flow measurement techniques and computer simulation methods continue to improve, there is a growing need for numerical simulation approaches that can assimilate experimental data into the simulation in a flexible and mathematically consistent manner. The problem of interest here is the simulation of blood flow in the left ventricle with the assimilation of experimental data provided by ultrasound imaging of microbubbles in the blood. The weighted least-squares finite element method is used because it allows data to be assimilated in a very flexible manner so that accurate measurements are more closely matched with the numerical solution than less accurate data. This approach is applied to two different test problems: a flexible flap that is displaced by a jet of fluid and blood flow in the porcine left ventricle. By adjusting how closely the simulation matches the experimental data, one can observe potential inaccuracies in the model because the simulation without experimental data differs significantly from the simulation with the data. Additionally, the assimilation of experimental data can help the simulation capture certain small effects that are present in the experiment, but not modeled directly in the simulation. PMID:22312412
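    Schematically, the weighted least-squares functional balances the PDE residual against the mismatch with the measurements, with weights that grow with measurement accuracy so that better data constrain the solution more strongly. The notation below is an illustrative sketch, not taken from the paper.

    ```latex
    % Schematic weighted least-squares functional with data assimilation:
    % R is the flow-equation residual operator, d_i are velocity measurements
    % at points x_i, and w_i are accuracy-dependent assimilation weights.
    J(\mathbf{u}, p) = \|R(\mathbf{u}, p)\|_{0,\Omega}^2
                     + \sum_i w_i \,\bigl|\mathbf{u}(x_i) - \mathbf{d}_i\bigr|^2 .
    ```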

  19. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2010-01-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control scheme in the sagittal plane. The torque control parameters (KP: proportional gain; KD: derivative gain) and the pole placements of the postural control system are estimated over time from the variation of the inclination angle using the fixed trace method, a recursive least squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 s stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing postures: (1) eyes-open posture as the healthy condition, (2) eyes-closed posture to represent visual impairment and (3) one-legged posture to represent lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for the quantitative assessment of standing postural control ability.
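    To illustrate how estimated gains map to pole placements, the following sketch computes the closed-loop poles of a linearised single-link ankle model under PD control; all physical values and gains are hypothetical, chosen only to show the calculation.

    ```python
    import numpy as np

    # Linearised ankle dynamics under PD control:
    #   J*th'' = m*g*l*th - KP*th - KD*th'
    # characteristic polynomial: J s^2 + KD s + (KP - m*g*l) = 0
    J, m, g, l = 80.0, 70.0, 9.81, 1.0   # inertia, mass, gravity, COM height
    KP, KD = 900.0, 250.0                # estimated controller gains
    poles = np.roots([J, KD, KP - m * g * l])
    print(poles)                         # stable iff both real parts < 0
    ```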
