Science.gov

Sample records for galerkin least-squares solutions

  1. A Galerkin least squares approach to viscoelastic flow.

    SciTech Connect

    Rao, Rekha R.; Schunk, Peter Randall

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A Pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

  2. Compressible flow calculations employing the Galerkin/least-squares method

    NASA Technical Reports Server (NTRS)

    Shakib, F.; Hughes, T. J. R.; Johan, Zdenek

    1989-01-01

    A multielement group, domain decomposition algorithm is presented for solving linear nonsymmetric systems arising in the finite-element analysis of compressible flows employing the Galerkin/least-squares method. The iterative strategy employed is based on the generalized minimum residual (GMRES) procedure originally proposed by Saad and Schultz. Two levels of preconditioning are investigated. Applications to problems of high-speed compressible flow illustrate the effectiveness of the scheme.
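
    The following is a minimal, illustrative sketch of the solver strategy named above: restarted GMRES on a nonsymmetric sparse system with one level of preconditioning (an incomplete LU factorization via SciPy). The matrix is a synthetic stand-in, not the paper's finite-element system or its domain-decomposition setup.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      # Synthetic nonsymmetric, diagonally dominant test matrix
      A = sp.diags([-1.0, 2.5, -1.4], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # One level of preconditioning: incomplete LU factorization of A
      ilu = spla.spilu(A, drop_tol=1e-4)
      M = spla.LinearOperator((n, n), matvec=ilu.solve)

      x, info = spla.gmres(A, b, M=M, restart=30)
      print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged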

  3. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    SciTech Connect

    Yoo, Jaechil

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid method is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  4. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to an extent that renders it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
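
    A hypothetical NumPy sketch of the four steps (i)-(iv), with illustrative thresholds; this is a reconstruction from the abstract, not the authors' Matlab code. Redundancy is detected here by rank-revealing QR on the augmented matrix, and the inconsistency index of each surviving equation is taken as its normalized least-squares residual.

      import numpy as np
      from scipy.linalg import qr

      def preprocess_and_solve(A, b, tol_redundant=1e-10, tol_contradict=1e-2):
          # (i) prune numerically redundant equations: rows of [A | b] that
          #     are linear combinations of others add no new information
          Ab = np.hstack([A, b[:, None]])
          _, R, piv = qr(Ab.T, pivoting=True)
          rank = np.sum(np.abs(np.diag(R)) > tol_redundant * np.abs(R[0, 0]))
          keep = np.sort(piv[:rank])
          A1, b1 = A[keep], b[keep]
          # (ii) inconsistency index: residual of each equation under the
          #      least-squares solution, scaled by the row norm
          x_ls = np.linalg.lstsq(A1, b1, rcond=None)[0]
          index = np.abs(A1 @ x_ls - b1) / np.linalg.norm(A1, axis=1)
          # (iii) remove equations presumed too contradictory
          ok = index < tol_contradict
          # (iv) minimum-norm least-squares solution of the reduced system
          return np.linalg.pinv(A1[ok]) @ b1[ok], index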

  5. On matrix factorization and efficient least squares solution.

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    1995-04-01

    Least squares solution of ill-conditioned normal equations by Cholesky-Banachiewicz (ChB) factorization suffers from numerical problems related to near singularity and loss of accuracy. We demonstrate that the near singularity does not arise for correctly posed statistical problems. The accuracy loss is also immaterial, since for nonlinear least squares the solution by Newton-Raphson iterations yields machine accuracy with no regard for the accuracy of an individual iteration (Wilkinson 1963). Since this accuracy may not be achieved using singular value decomposition (SVD) without additional iterations for differential corrections, and since SVD is more demanding in terms of number of operations and particularly in terms of required memory, we argue that ChB factorization remains the algorithm of choice for least squares. We present a new, very compact code implementing the Cholesky (1924) and Banachiewicz (1938b) factorization in an elegant form proposed by Banachiewicz (1942). A source listing of the code is provided. We point out that in the same publication Banachiewicz (1938) discovered LU factorization of square matrices before Crout (1941) and rediscovered factorization of the symmetric matrices after Cholesky (1924). Since the two algorithms became confused, no due credit is given to Banachiewicz in the modern literature.
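
    In NumPy terms, the ChB route argued for above amounts to the following minimal sketch for a full-column-rank design matrix (illustrative only, not the authors' compact code):

      import numpy as np

      def solve_normal_cholesky(A, b):
          N = A.T @ A                        # normal matrix (SPD for
          y = A.T @ b                        # full-column-rank A)
          L = np.linalg.cholesky(N)          # N = L L^T
          z = np.linalg.solve(L, y)          # solve L z = y
          return np.linalg.solve(L.T, z)     # solve L^T x = z

      rng = np.random.default_rng(0)
      A = rng.normal(size=(100, 5))
      x_true = np.arange(5.0)
      b = A @ x_true + 1e-6 * rng.normal(size=100)
      print(solve_normal_cholesky(A, b))     # ~ x_true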

  6. Least-squares solution of ill-conditioned systems. II

    NASA Astrophysics Data System (ADS)

    Branham, R. L., Jr.

    1980-11-01

    A singular-value analysis of normal equations from observations of minor planets 6 (Hebe), 7 (Iris), 8 (Flora), 9 (Metis), and 15 (Eunomia) is undertaken to determine corrections to a number of astronomical parameters, particularly the equinox correction for the FK4. In a previous investigation the test for small singular values was criticized because it resulted in discordant equinox determinations. Here it is shown that none of the tests employed by singular-value analysis leads to solutions superior to those given by classical least squares. It is concluded that singular-value analysis has legitimate uses in astronomy, but that it is misapplied when employed to estimate astronomical parameters in a well-defined model. Also discussed is the question of whether it is preferable to reduce the equations of condition by orthogonal transformations rather than to form normal equations. Some suggestions are made regarding the desirability of planning observational programs in such a way that the observations do not lead to extremely ill-conditioned systems.
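
    The two estimators being compared above can be stated compactly; below is an illustrative NumPy contrast between classical least squares and a truncated-SVD solution that zeroes small singular values (the cutoff is arbitrary here, which is in effect the point at issue in the abstract).

      import numpy as np

      def tsvd_solve(A, b, rel_cutoff):
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          keep = s > rel_cutoff * s[0]       # discard "small" singular values
          return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

      A = np.array([[1.0, 1.0], [1.0, 1.0000001], [1.0, 2.0]])
      b = np.array([2.0, 2.0, 3.0])
      print(np.linalg.lstsq(A, b, rcond=None)[0])   # classical least squares
      print(tsvd_solve(A, b, rel_cutoff=1e-3))      # truncated SVD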

  7. Fast algorithm for the solution of large-scale non-negativity constrained least squares problems.

    SciTech Connect

    Van Benthem, Mark Hilary; Keenan, Michael Robert

    2004-06-01

    Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to reduce substantially the computational burden required for NNLS problems having large numbers of observation vectors.
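
    SciPy exposes the classical active-set NNLS building block referred to above; the plain per-column loop below (synthetic data, not the authors' implementation) is exactly the workload that the paper's combinatorial rearrangement accelerates.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      K = rng.random((50, 4))                # e.g., pure-component spectra
      C_true = rng.random((4, 200))          # non-negative contributions
      D = K @ C_true                         # many observation vectors

      C_est = np.column_stack([nnls(K, d)[0] for d in D.T])
      print(np.max(np.abs(C_est - C_true)))  # ~ 0 for noise-free data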

  8. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  9. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to an extent that renders it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.

  10. A new family of stable elements for the Stokes problem based on a mixed Galerkin/least-squares finite element formulation

    NASA Technical Reports Server (NTRS)

    Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro

    1989-01-01

    By adding a residual form of the equilibrium equation to the classical Hellinger-Reissner formulation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.

  11. A new finite element formulation for computational fluid dynamics. IX - Fourier analysis of space-time Galerkin/least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Shakib, Farzin; Hughes, Thomas J. R.

    1991-01-01

    A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.

  12. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities, depending on the choice of covariance types. Selected for this study were 30' x 30' mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal from this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions, and satellite mission parameters.

  13. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, SIAM, 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes. PMID:11248952

  14. Least squares solutions of the HJB equation with neural network value-function approximators.

    PubMed

    Tassa, Yuval; Erez, Tom

    2007-07-01

    In this paper, we present an empirical study of iterative least squares minimization of the Hamilton-Jacobi-Bellman (HJB) residual with a neural network (NN) approximation of the value function. Although the nonlinearities in the optimal control problem and NN approximator preclude theoretical guarantees and raise concerns of numerical instabilities, we present two simple methods for promoting convergence, the effectiveness of which is presented in a series of experiments. The first method involves the gradual increase of the horizon time scale, with a corresponding gradual increase in value function complexity. The second method involves the assumption of stochastic dynamics which introduces a regularizing second derivative term to the HJB equation. A gradual reduction of this term provides further stabilization of the convergence. We demonstrate the solution of several problems, including the 4-D inverted-pendulum system with bounded control. Our approach requires no initial stabilizing policy or any restrictive assumptions on the plant or cost function, only knowledge of the plant dynamics. In the Appendix, we provide the equations for first- and second-order differential backpropagation. PMID:17668659

  15. Non-oscillatory and non-diffusive solution of convection problems by the iteratively reweighted least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1993-01-01

    A comparative description is presented for the least-squares FEM (LSFEM) for 2D steady-state pure convection problems. In addition to exhibiting better control of the streamline derivative than the streamline upwinding Petrov-Galerkin method, numerical convergence rates are obtained which show the LSFEM to be virtually optimal. The LSFEM is used as a framework for an iteratively reweighted LSFEM yielding nonoscillatory and nondiffusive solutions for problems with contact discontinuities; this method is shown to convect contact discontinuities without error when using triangular and bilinear elements.
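
    The reweighting idea is generic; the sketch below shows an iteratively reweighted least-squares loop for an algebraic L_p regression problem (p = 1 gives an outlier-insensitive fit), as a stand-in illustration of the mechanism rather than the paper's finite element formulation.

      import numpy as np

      def irls(A, b, p=1.0, iters=20, eps=1e-8):
          x = np.linalg.lstsq(A, b, rcond=None)[0]
          for _ in range(iters):
              r = np.abs(A @ x - b)
              w = np.maximum(r, eps) ** ((p - 2) / 2)   # sqrt of L_p weights
              x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
          return x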

  16. Phase-space finite elements in a least-squares solution of the transport equation

    SciTech Connect

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  17. Analysis of magnetic measurement data by least squares fit to series expansion solution of 3-D Laplace equation

    SciTech Connect

    Blumberg, L.N.

    1992-03-01

    The authors have analyzed simulated magnetic measurement data for the SXLS bending magnet in a plane perpendicular to the reference axis at the magnet midpoint by fitting the data to an expansion solution of the 3-dimensional Laplace equation in curvilinear coordinates as proposed by Brown and Servranckx. The method of least squares is used to evaluate the expansion coefficients and their uncertainties, and compared to results from an FFT fit of 128 simulated data points on a 12-mm radius circle about the reference axis. They find that the FFT method gives smaller coefficient uncertainties than the Least Squares method when the data are within similar areas. The Least Squares method compares more favorably when a larger number of data points are used within a rectangular area of 30-mm vertical by 60-mm horizontal--perhaps the largest area within the 35-mm x 75-mm vacuum chamber for which data could be obtained. For a grid with 0.5-mm spacing within the 30 x 60 mm area the Least Squares fit gives much smaller uncertainties than the FFT. They are therefore in the favorable position of having two methods which can determine the multipole coefficients to much better accuracy than the tolerances specified to General Dynamics. The FFT method may be preferable since it requires only one Hall probe rather than the four envisioned for the least squares grid data. However least squares can attain better accuracy with fewer probe movements. The time factor in acquiring the data will likely be the determining factor in choice of method. They should further explore least squares analysis of a Fourier expansion of data on a circle or arc of a circle since that method gives coefficient uncertainties without need for multiple independent sets of data as needed by the FFT method.

  18. Distribution of error in least-squares solution of an overdetermined system of linear simultaneous equations

    NASA Technical Reports Server (NTRS)

    Miller, C. D.

    1972-01-01

    Probability density functions were derived for errors in the evaluation of unknowns by the least squares method in a system of nonhomogeneous linear equations. The coefficients of the unknowns were assumed correct, and computational precision was also assumed. A vector space was used, with the number of dimensions equal to the number of equations. An error vector was defined and assumed to have a uniform distribution of orientation throughout the vector space. The density functions are shown to be insensitive to the biasing effects of the source of the system of equations.

  19. Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Landi, G.; Loli Piccolomini, E.; Nagy, J. G.

    2015-11-01

    In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of different incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.
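
    As a stand-in for the reconstruction step, the sketch below runs a Levenberg-Marquardt nonlinear least-squares fit with SciPy on a toy exponential model; the actual polyenergetic tomosynthesis forward model is far more involved.

      import numpy as np
      from scipy.optimize import least_squares

      def forward(params, t):                # toy nonlinear forward model
          a, k = params
          return a * np.exp(-k * t)

      t = np.linspace(0.0, 5.0, 50)
      rng = np.random.default_rng(2)
      data = forward([2.0, 1.3], t) + 0.01 * rng.normal(size=t.size)

      fit = least_squares(lambda p: forward(p, t) - data, [1.0, 1.0],
                          method="lm")       # Levenberg-Marquardt
      print(fit.x)                           # ~ [2.0, 1.3]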

  20. Least-squares solution of incompressible Navier-Stokes equations with the p-version of finite elements

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Sonnad, Vijay

    1991-01-01

    A p-version of the least squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady state incompressible viscous flow problems. The resulting system of symmetric and positive definite linear equations can be solved satisfactorily with the conjugate gradient method. In conjunction with the use of rapid operator application, which avoids the formation of either element or global matrices, it is possible to achieve a highly compact and efficient solution scheme for the incompressible Navier-Stokes equations. Numerical results are presented for two-dimensional flow over a backward-facing step. The effectiveness of simple outflow boundary conditions is also demonstrated.

  1. Bayesian least squares deconvolution

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  2. Multilevel first-order system least squares for PDEs

    SciTech Connect

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  3. Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1997-01-01

    A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model, and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
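
    For the special case of one weight per observation, WLS already reduces to an OLS step on rescaled data, as in the minimal sketch below; the paper's majorization scheme is what extends the idea to weight patterns that do not admit this simple rescaling.

      import numpy as np

      def wls_via_ols(A, b, w):             # w: nonnegative observation weights
          sw = np.sqrt(w)
          return np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]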

  4. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
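
    A NumPy sketch of AKLSQF's error-tolerance mode follows: raise the degree until the least-squares fit error meets the user's tolerance. numpy.polynomial stands in here for the program's orthogonal factorial polynomials and Stirling-number reduction.

      import numpy as np
      from numpy.polynomial import polynomial as P

      def fit_to_tolerance(x, y, tol, max_degree=100):
          for degree in range(1, max_degree + 1):
              coeffs = P.polyfit(x, y, degree)
              err = np.sqrt(np.mean((P.polyval(x, coeffs) - y) ** 2))
              print(f"degree {degree}: rms error {err:.3e}")
              if err <= tol:
                  return coeffs
          raise RuntimeError("tolerance not met")

      x = np.linspace(0.0, 1.0, 25)          # uniformly spaced data
      coeffs = fit_to_tolerance(x, np.sin(2 * np.pi * x), tol=1e-3)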

  5. On the Numerical Solution of the Elliptic Monge-Ampère Equation in Dimension Two: A Least-Squares Approach

    NASA Astrophysics Data System (ADS)

    Dean, Edward J.; Glowinski, Roland

    During his outstanding career, Olivier Pironneau has addressed the solution of a large variety of problems from the Natural Sciences, Engineering, and Finance, to name a few; the many articles and books he has written are evidence of this activity. It is the opinion of these authors, and former collaborators of O. Pironneau (cf. [DGP91]), that this chapter is well-suited to a volume honoring him. Indeed, the two pillars of the solution methodology that we are going to describe are: (1) a nonlinear least squares formulation in an appropriate Hilbert space, and (2) a mixed finite element approximation, reminiscent of the one used in [DGP91] and [GP79] for solving the Stokes and Navier-Stokes equations in their stream function-vorticity formulation; the contributions of O. Pironneau on the two above topics are well-known world wide. Last but not least, we will show that the solution method discussed here can be viewed as a solution method for a non-standard variant of the incompressible Navier-Stokes equations, an area where O. Pironneau has many outstanding and celebrated contributions (cf. [Pir89], for example).

  6. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  7. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, Dennis C.; Romero, Louis A.

    1995-01-01

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals.

  8. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, D.C.; Romero, L.A.

    1995-06-13

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals. 6 figs.
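
    The unweighted least-squares solution used above as the preconditioning step can be written as a discrete Poisson solve via cosine transforms (the well-known Ghiglia-Romero construction). The sketch below shows only that inner step, assuming Neumann boundaries; the weighted preconditioned conjugate gradient outer loop is omitted.

      import numpy as np
      from scipy.fft import dctn, idctn

      def wrap(d):
          return (d + np.pi) % (2 * np.pi) - np.pi

      def unweighted_ls_unwrap(psi):         # psi: wrapped phase, 2-D array
          M, N = psi.shape
          gx = np.zeros_like(psi)
          gy = np.zeros_like(psi)
          gx[:-1, :] = wrap(np.diff(psi, axis=0))   # wrapped row differences
          gy[:, :-1] = wrap(np.diff(psi, axis=1))   # wrapped column differences
          rho = gx + gy                      # divergence of the wrapped gradient
          rho[1:, :] -= gx[:-1, :]
          rho[:, 1:] -= gy[:, :-1]
          rhat = dctn(rho, type=2, norm="ortho")
          i = np.arange(M)[:, None]
          j = np.arange(N)[None, :]
          denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
          denom[0, 0] = 1.0                  # the constant offset is arbitrary
          rhat[0, 0] = 0.0
          return idctn(rhat / denom, type=2, norm="ortho")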

  9. Deming's General Least Square Fitting

    Energy Science and Technology Software Center (ESTSC)

    1992-02-18

    DEM4-26 is a generalized least squares fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.
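
    For the linear case, the core fit such a program performs is Deming regression, which allows errors in both x and y and has a closed form; a minimal sketch (delta is the assumed ratio of the y- to x-error variances):

      import numpy as np

      def deming_fit(x, y, delta=1.0):
          mx, my = x.mean(), y.mean()
          sxx = np.mean((x - mx) ** 2)
          syy = np.mean((y - my) ** 2)
          sxy = np.mean((x - mx) * (y - my))
          slope = ((syy - delta * sxx
                    + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
                   / (2 * sxy))
          return my - slope * mx, slope      # intercept, slope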

  10. Nonlinear least squares and regularization

    SciTech Connect

    Berryman, J.G.

    1996-04-01

    A problem frequently encountered in the earth sciences requires deducing physical parameters of the system of interest from measurements of some other (hopefully) closely related physical quantity. The obvious example in seismology (either surface reflection seismology or crosswell seismic tomography) is the use of measurements of sound wave traveltime to deduce wavespeed distribution in the earth and then subsequently to infer the values of other physical quantities of interest such as porosity, water or oil saturation, permeability, etc. The author presents and discusses some general ideas about iterative nonlinear output least-squares methods. The main result is that, if it is possible to do forward modeling on a physical problem in a way that permits the output (i.e., the predicted values of some physical parameter that could be measured) and the first derivative of the same output with respect to the model parameters (whatever they may be) to be calculated numerically, then it is possible (at least in principle) to solve the inverse problem using the method described. The main trick learned in this analysis comes from the realization that the steps in the model updates may have to be quite small in some cases for the implied guarantees of convergence to be realized.

  11. The moving-least-squares-particle hydrodynamics method (MLSPH)

    SciTech Connect

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

  12. Constrained least squares estimation incorporating wavefront sensing

    NASA Astrophysics Data System (ADS)

    Ford, Stephen D.; Welsh, Byron M.; Roggemann, Michael C.

    1998-11-01

    We address the optimal processing of astronomical images using the deconvolution from wave-front sensing (DWFS) technique. A constrained least-squares (CLS) solution which incorporates ensemble-averaged DWFS data is derived using Lagrange minimization. The new estimator requires DWFS data, noise statistics, optical transfer function statistics, and a constraint. The constraint can be chosen such that the algorithm selects a conventional regularization constant automatically. No ad hoc parameter tuning is necessary. The algorithm uses an iterative Newton-Raphson minimization to determine the optimal Lagrange multiplier. Computer simulation of a 1 m telescope imaging through atmospheric turbulence is used to test the estimation scheme. CLS object estimates are compared with the corresponding long exposure images. The CLS algorithm provides images with superior resolution and is computationally inexpensive, converging to a solution in less than 10 iterations.

  13. A least-squares method for second order noncoercive elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Ku, Jaeun

    2007-03-01

    In this paper, we consider a least-squares method proposed by Bramble, Lazarov and Pasciak (1998) which can be thought of as a stabilized Galerkin method for noncoercive problems with unique solutions. We modify their method by weakening the strength of the stabilization terms and present various new error estimates. The modified method has all the desirable properties of the original method; indeed, we shall show some theoretical properties that are not known for the original method. At the same time, our numerical experiments show an improvement of the method due to the modification.

  14. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
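
    In NumPy terms the recipe reads as follows: the fit's covariance matrix yields the parameter SEs directly, and a derived quantity g(beta) inherits a propagated SE through its gradient (shown for a straight-line fit; the data below are made up for illustration).

      import numpy as np

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
      coeffs, cov = np.polyfit(x, y, 1, cov=True)
      print(coeffs, np.sqrt(np.diag(cov)))   # parameters and their SEs

      # Propagated SE of the fitted value at x0 = 5, i.e., g = m*x0 + c
      grad = np.array([5.0, 1.0])            # [dg/dm, dg/dc]
      print(np.sqrt(grad @ cov @ grad))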

  15. Collinearity in Least-Squares Analysis

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  16. Weighted conditional least-squares estimation

    SciTech Connect

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.

  17. FRACVAL: Validation (nonlinear least squares method) of the solution of one-dimensional transport of decaying species in a discrete planar fracture with rock matrix diffusion

    SciTech Connect

    Gureghian, A.B.

    1990-08-01

    Analytical solutions based on the Laplace transforms are presented for the one-dimensional, transient, advective-dispersive transport of a reacting radionuclide through a discrete planar fracture with constant aperture subject to diffusion in the surrounding rock matrix where both regions of solute migration display residual concentrations. The dispersion-free solutions, which are of closed form, are also reported. The solution assumes that the ground-water flow regime is under steady-state and isothermal conditions and that the rock matrix is homogeneous, isotropic, and saturated with stagnant water. The verification of the solution was performed by means of related analytical solutions dealing with particular aspects of the transport problem under investigation on the one hand, and a numerical solution capable of handling the complete problem on the other. The integrals encountered in the general solution are evaluated by means of a composite Gauss-Legendre quadrature scheme. 9 refs., 8 figs., 32 tabs.

  18. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
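
    A sketch of this kind of formulation using CVXPY (assumed available): a least-squares data fit subject to an explicit LMI lower bound on the inertia matrix. Phi and tau are hypothetical stand-ins for the regressor built from the test maneuvers and the measured torques; constructing Phi is the problem-specific part omitted here.

      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(3)
      Phi = rng.normal(size=(60, 6))         # stand-in regressor matrix
      theta_true = np.array([5.0, 6.0, 7.0, 0.2, -0.1, 0.3])
      tau = Phi @ theta_true + 0.01 * rng.normal(size=60)

      J = cp.Variable((3, 3), symmetric=True)            # inertia matrix
      theta = cp.hstack([J[0, 0], J[1, 1], J[2, 2],      # its 6 unique
                         J[0, 1], J[0, 2], J[1, 2]])     # entries
      constraints = [J >> 0.1 * np.eye(3)]               # explicit LMI bound
      prob = cp.Problem(cp.Minimize(cp.sum_squares(Phi @ theta - tau)),
                        constraints)
      prob.solve()                           # solved as a semidefinite program
      print(J.value)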

  19. A spectral mimetic least-squares method

    DOE PAGES Beta

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  1. Theoretical study of the incompressible Navier-Stokes equations by the least-squares method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.

    1994-01-01

    Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method, which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is thoroughly studied first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique is then employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of differential equations and additional boundary conditions. In this paper, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators which play an essential role in the analysis.

  2. The least square optimization in image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-dong; Yang, Yong-yue

    2015-02-01

    Image registration has been a hot research topic in computer vision and image processing, and it is one of the key technologies in image mosaic. In order to improve the accuracy of matching feature points, this paper puts forward a least squares optimization for image mosaic based on an algorithm for matching the similarity of matrices. The correlation coefficient method for matrices is used to match the module points in the overlap region of the images and to calculate the error between matrices. The error of the feature points can be further minimized by the method of least squares optimization. Finally, the image mosaic can be achieved from the feature-point pairs with the minimum residual sum of squares. The experimental results demonstrate that least squares optimization can mosaic images with an overlap region and improve the accuracy of matching feature points.
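
    The final least-squares step has a compact linear-algebra form when the mosaic transform is modeled as affine; a minimal sketch from matched point pairs (synthetic data, illustrative of the residual-minimization step only):

      import numpy as np

      def fit_affine_lstsq(src, dst):        # src, dst: (N, 2) matched points
          A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
          M, *_ = np.linalg.lstsq(A, dst, rcond=None)
          return M                           # (3, 2): dst ~ [src, 1] @ M

      src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
      dst = src @ np.array([[1.0, 0.1], [-0.1, 1.0]]) + np.array([5.0, 2.0])
      M = fit_affine_lstsq(src, dst)
      resid = np.hstack([src, np.ones((4, 1))]) @ M - dst
      print(np.abs(resid).max())             # ~ 0 for a consistent match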

  3. Regularized total least squares approach for nonconvolutional linear inverse problems.

    PubMed

    Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J

    1999-01-01

    In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442
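
    For contrast with the regularized RQ-based approach above, plain (unregularized) total least squares has a direct SVD solution over the augmented matrix [A b]; a minimal sketch:

      import numpy as np

      def tls_solve(A, b):
          n = A.shape[1]
          _, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
          v = Vt[-1]                         # right singular vector of the
          return -v[:n] / v[n]               # smallest singular value

      rng = np.random.default_rng(4)
      A = rng.normal(size=(30, 3))
      x_true = np.array([1.0, -2.0, 0.5])
      b = A @ x_true + 0.001 * rng.normal(size=30)
      print(tls_solve(A, b))                 # ~ x_true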

  4. On Least Squares Fitting Nonlinear Submodels.

    ERIC Educational Resources Information Center

    Bechtel, Gordon G.

    Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…

  5. Kriging and its relation to least squares

    SciTech Connect

    Oden, N.

    1984-11-01

    Kriging is a technique for producing contour maps that, under certain conditions, are optimal in a mean squared error sense. The relation of Kriging to Least Squares is reviewed here. New methods for analyzing residuals are suggested, ML estimators inspected, and an expression derived for calculating cross-validation error. An example using ground water data is provided.

  6. Factor Analysis by Generalized Least Squares.

    ERIC Educational Resources Information Center

    Joreskog, Karl G.; Goldberger, Arthur S.

    Aitken's generalized least squares (GLS) principle, with the inverse of the observed variance-covariance matrix as a weight matrix, is applied to estimate the factor analysis model in the exploratory (unrestricted) case. It is shown that the GLS estimates are scale free and asymptotically efficient. The estimates are computed by a rapidly…

  7. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.

  8. Partial least squares for dependent data

    PubMed Central

    Singer, Marco; Krivobokova, Tatyana; Munk, Axel; de Groot, Bert

    2016-01-01

    We consider the partial least squares algorithm for dependent data and study the consequences of ignoring the dependence both theoretically and numerically. Ignoring nonstationary dependence structures can lead to inconsistent estimation, but a simple modification yields consistent estimation. A protein dynamics example illustrates the superior predictive power of the proposed method. PMID:27279662
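
    The baseline the paper modifies is the standard PLS fit, available for instance in scikit-learn; the sketch below shows that independence-assuming baseline only, not the dependence correction.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      X = rng.normal(size=(200, 10))
      y = X[:, :3] @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

      pls = PLSRegression(n_components=3).fit(X, y)
      print(pls.score(X, y))                 # R^2 of the fitted model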

  9. BLS: Box-fitting Least Squares

    NASA Astrophysics Data System (ADS)

    Kovács, G.; Zucker, S.; Mazeh, T.

    2016-07-01

    BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.
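
    An implementation of this algorithm ships with Astropy (assuming the astropy.timeseries API); typical use on a synthetic box-shaped transit signal:

      import numpy as np
      from astropy.timeseries import BoxLeastSquares

      rng = np.random.default_rng(6)
      t = np.sort(rng.uniform(0.0, 20.0, 2000))
      y = 1.0 + 0.001 * rng.normal(size=t.size)
      y[(t % 3.0) < 0.2] -= 0.01             # period 3.0: brief lower level

      bls = BoxLeastSquares(t, y)
      pg = bls.autopower(0.2)                # search with transit duration 0.2
      print(pg.period[np.argmax(pg.power)])  # ~ 3.0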

  10. A Least-Squares Transport Equation Compatible with Voids

    SciTech Connect

    Hansen, Jon; Peterson, Jacob; Morel, Jim; Ragusa, Jean; Wang, Yaqi

    2014-12-01

    Standard second-order self-adjoint forms of the transport equation, such as the even-parity, odd-parity, and self-adjoint angular flux equation, cannot be used in voids. Perhaps more important, they experience numerical convergence difficulties in near-voids. Here we present a new form of a second-order self-adjoint transport equation that has an advantage relative to standard forms in that it can be used in voids or near-voids. Our equation is closely related to the standard least-squares form of the transport equation, with both equations being applicable in a void and having a nonconservative analytic form. However, unlike the standard least-squares form of the transport equation, our least-squares equation is compatible with source iteration. It has been found that the standard least-squares form of the transport equation with a linear-continuous finite-element spatial discretization has difficulty in the thick diffusion limit. Here we extensively test the 1D slab-geometry version of our scheme with respect to void solutions, spatial convergence rate, and the intermediate and thick diffusion limits. We also define an effective diffusion synthetic acceleration scheme for our discretization. Our conclusion is that our least-squares Sn formulation represents an excellent alternative to existing second-order Sn transport formulations.

  11. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  12. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.

  13. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  14. Least-squares wave-equation migration/inversion

    NASA Astrophysics Data System (ADS)

    Kuehl, Henning

    This thesis presents an acoustic migration/inversion algorithm that inverts seismic reflection data for the angle dependent subsurface reflectivity by means of least-squares minimization. The method is based on the primary seismic data representation (single scattering approximation) and utilizes one-way wavefield propagators ('wave-equation operators') to compute the Green's functions of the problem. The Green's functions link the measured reflection seismic data to the image points in the earth's interior where an angle dependent imaging condition probes the image point's angular spectrum in depth. The proposed least-squares wave-equation migration minimizes a weighted seismic data misfit function complemented with a model space regularization term. The regularization penalizes discontinuities and rapid amplitude changes in the reflection angle dependent common image gathers---the model space of the inverse problem. 'Roughness' with respect to angle dependence is attributed to seismic data errors (e.g., incomplete and irregular wavefield sampling) which adversely affect the amplitude fidelity of the common image gathers. The least-squares algorithm fits the seismic data taking their variance into account, and, at the same time, imposes some degree of smoothness on the solution. The model space regularization increases amplitude robustness considerably. It mitigates kinematic imaging artifacts and noise while preserving the data consistent smooth angle dependence of the seismic amplitudes. In least-squares migration the seismic modelling operator and the migration operator---the adjoint of modelling---are applied iteratively to minimize the regularized objective function. Whilst least-squares migration/inversion is computationally expensive, synthetic data tests show that usually a few iterations suffice for its benefits to take effect. An example from the Gulf of Mexico illustrates the application of least-squares wave-equation migration/inversion to a real data set.

  15. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  16. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  17. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  18. Total least squares for anomalous change detection

    SciTech Connect

    Theiler, James P; Matsekh, Anna M

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.

  19. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.

  20. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  1. Vehicle detection using partial least squares.

    PubMed

    Kembhavi, Aniruddha; Harwood, David; Davis, Larry S

    2011-06-01

    Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and a simple yet powerful image descriptor named Pairs of Pixels, which captures the structural characteristics of objects. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance. PMID:20921579

  2. Flexible least squares for approximately linear systems

    NASA Astrophysics Data System (ADS)

    Kalaba, Robert; Tesfatsion, Leigh

    1990-10-01

    A probability-free multicriteria approach is presented to the problem of filtering and smoothing when prior beliefs concerning dynamics and measurements take an approximately linear form. Consideration is given to applications in the social and biological sciences, where obtaining agreement among researchers regarding probability relations for discrepancy terms is difficult. The essence of the proposed flexible-least-squares (FLS) procedure is the cost-efficient frontier, a curve in a two-dimensional cost plane which provides an explicit and systematic way to determine the efficient trade-offs between the separate costs incurred for dynamic and measurement specification errors. The FLS estimates show how the state vector could have evolved over time in a manner minimally incompatible with the prior dynamic and measurement specifications. A FORTRAN program for implementing the FLS filtering and smoothing procedure for approximately linear systems is provided.
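
    The following is a minimal dense sketch (Python/NumPy, illustrative names only) of the flexible-least-squares idea: measurement residuals y_t - H x_t and dynamic residuals x_{t+1} - A x_t are stacked into one ordinary least-squares problem, and varying the weight mu traces the cost-efficient frontier described above. The original FORTRAN program is not reproduced here.

        import numpy as np

        def fls_smooth(y, H, A, mu):
            """Flexible least squares state estimates: y (T x m) observations,
            H (m x n) measurement matrix, A (n x n) dynamics, mu >= 0 the
            relative weight on dynamic specification errors."""
            T, m = y.shape
            n = H.shape[1]
            rows, rhs = [], []
            for t in range(T):                      # measurement equations
                r = np.zeros((m, T * n))
                r[:, t * n:(t + 1) * n] = H
                rows.append(r)
                rhs.append(y[t])
            for t in range(T - 1):                  # dynamic equations
                r = np.zeros((n, T * n))
                r[:, t * n:(t + 1) * n] = -np.sqrt(mu) * A
                r[:, (t + 1) * n:(t + 2) * n] = np.sqrt(mu) * np.eye(n)
                rows.append(r)
                rhs.append(np.zeros(n))
            X, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                                    rcond=None)
            return X.reshape(T, n)                  # state trajectory estimate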

  3. Tensor hypercontraction. II. Least-squares renormalization.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2012-12-14

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)]. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry. PMID:23248986

  4. Tensor hypercontraction. II. Least-squares renormalization

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  5. A new least-squares transport equation compatible with voids

    SciTech Connect

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the Sn method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties. (authors)

  6. Multisplitting for linear, least squares and nonlinear problems

    SciTech Connect

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer (University of Wuppertal) on the linear problems and with Hans Mittelmann (Arizona State University) on the nonlinear problems.

  7. Solving linear inequalities in a least squares sense

    SciTech Connect

    Bramley, R.; Winnicka, B.

    1994-12-31

    Let A be an arbitrary real m x n matrix, and let b be a given vector in R^m. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax - b||, where ||.|| refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax - b) = 0, and the optimal residual r* = b - Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax <= b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax - b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
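
    The abstract does not describe the authors' solvers, but the objective itself is easy to state in code. Below is a hedged gradient-descent sketch in NumPy for minimizing 0.5*||(Ax - b)_+||^2; the step size, iteration count, and names are illustrative.

        import numpy as np

        def lsq_inequalities(A, b, iters=500):
            """Minimize 0.5 * ||(A x - b)_+||^2 by gradient descent, where
            (v)_+ is the componentwise maximum of v and zero; only violated
            inequalities contribute to the gradient A^T (A x - b)_+."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = ||A||_2^2
            for _ in range(iters):
                r = np.maximum(A @ x - b, 0.0)      # active (violated) rows
                x -= step * (A.T @ r)
            return x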

  8. Recursive total-least-squares adaptive filtering

    NASA Astrophysics Data System (ADS)

    Dowling, Eric M.; DeGroat, Ronald D.

    1991-12-01

    In this paper a recursive total least squares (RTLS) adaptive filter is introduced and studied. The TLS approach is more appropriate and provides more accurate results than the LS approach when there is error on both sides of the adaptive filter equation; for example, linear prediction, AR modeling, and direction finding. The RTLS filter weights are updated in time O(mr) where m is the filter order and r is the dimension of the tracked subspace. In conventional adaptive filtering problems, r equals 1, so that updates can be performed with complexity O(m). The updates are performed by tracking an orthonormal basis for the smaller of the signal or noise subspaces using a computationally efficient subspace tracking algorithm. The filter is shown to outperform both LMS and RLS in terms of tracking and steady state tap weight error norms. It is also more versatile in that it can adapt its weight in the absence of persistent excitation, i.e., when the input data correlation matrix is near rank deficient. Through simulation, the convergence and tracking properties of the filter are presented and compared with LMS and RLS.

  9. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
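
    For the linear case, the standard-error expressions reduce to textbook linear-theory formulas. The sketch below (NumPy, illustrative names) assumes independent, homoscedastic random errors and a full-rank design matrix; it is a generic illustration, not the report's derivation for the nonlinear case.

        import numpy as np

        def fit_with_errors(X, y):
            """Ordinary least squares plus error estimates: parameter
            covariance sigma^2 (X^T X)^-1 and the standard error of the
            fitted function at each data point."""
            theta, *_ = np.linalg.lstsq(X, y, rcond=None)
            m, n = X.shape
            resid = y - X @ theta
            sigma2 = resid @ resid / (m - n)          # residual variance
            cov = sigma2 * np.linalg.inv(X.T @ X)     # parameter covariance
            se_fit = np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))
            return theta, cov, se_fit                 # se of X @ theta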

  10. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics, with singularity and joint-limit robustness, for both redundant and non-redundant serial-link manipulators. A damped least squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution; the weighting damps specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure was implemented for a six-d.o.f. teleoperator, and a well-behaved slave manipulator resulted under teleoperational control.
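
    A single damped least-squares step is compact enough to sketch. The version below (NumPy, illustrative names) uses a fixed, user-supplied weighting matrix W; the paper's dynamic choice of weights near singularities and joint limits is not reproduced.

        import numpy as np

        def damped_ls_step(J, dx, lam=0.1, W=None):
            """One damped least-squares IK step: returns the joint update dq
            minimizing ||J dq - dx||^2 + lam^2 * dq^T W dq, i.e. solves
            (J^T J + lam^2 W) dq = J^T dx."""
            n = J.shape[1]
            W = np.eye(n) if W is None else W
            return np.linalg.solve(J.T @ J + lam**2 * W, J.T @ dx)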

  11. Recursive least-squares learning algorithms for neural networks

    SciTech Connect

    Lewis, P.S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)

    1990-01-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is of order O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
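
    As a point of reference, a textbook recursive least squares update of the kind the paper builds on is sketched below (the paper applies such recursions to a linearization of the network training error, with P playing the role of an inverse Hessian estimate). This is a generic sketch with illustrative names, not the authors' code.

        import numpy as np

        class RLS:
            """Recursive least squares for a linear-in-parameters model
            d ~ phi @ w, with optional exponential forgetting."""
            def __init__(self, n, forgetting=1.0, p0=1e3):
                self.w = np.zeros(n)
                self.P = p0 * np.eye(n)   # inverse correlation estimate
                self.lam = forgetting
            def update(self, phi, d):
                Pphi = self.P @ phi
                k = Pphi / (self.lam + phi @ Pphi)    # gain vector
                self.w += k * (d - phi @ self.w)      # error correction
                self.P = (self.P - np.outer(k, Pphi)) / self.lam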

  12. Least-squares framework for projection MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Gregor, Jens; Rannou, Fernando

    2001-07-01

    Magnetic resonance signals that have very short relaxation times are conveniently sampled in a spherical fashion. We derive a least squares framework for reconstructing three-dimensional source distribution images from such data. Using a finite-series approach, the image is represented as a weighted sum of translated Kaiser-Bessel window functions. The Radon transform thereof establishes the connection with the projection data that one can obtain from the radial sampling trajectories. The resulting linear system of equations is sparse, but quite large. To reduce the size of the problem, we introduce focus of attention. Based on the theory of support functions, this data-driven preprocessing scheme eliminates equations and unknowns that merely represent the background. The image reconstruction and the focus of attention both require a least squares solution to be computed. We describe a projected gradient approach that facilitates a non-negativity constrained version of the powerful LSQR algorithm. In order to ensure reasonable execution times, the least squares computation can be distributed across a network of PCs and/or workstations. We discuss how to effectively parallelize the NN-LSQR algorithm. We close by presenting results from experimental work that addresses both computational issues and image quality using a mathematical phantom.

  13. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations]

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by the nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptical nonlinear equation with partial derivatives are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev functional space.

  14. Götterdämmerung over total least squares

    NASA Astrophysics Data System (ADS)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have been developed in the past also by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. To this end, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line to 3D points and the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
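
    As an illustration of the SVD route mentioned above, here is a minimal total least squares fit of a straight line to 2D points (NumPy, illustrative names). It is a generic orthogonal-regression sketch, not the paper's derivation.

        import numpy as np

        def tls_line_2d(x, y):
            """Orthogonal (TLS) line fit: returns the centroid (a point on
            the line) and a unit direction vector. The direction is the
            right singular vector of the centered data with the largest
            singular value; the remaining vector is the line normal."""
            P = np.column_stack([x, y])
            c = P.mean(axis=0)
            _, _, Vt = np.linalg.svd(P - c)
            return c, Vt[0]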

  15. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

  16. General spline filters for discontinuous Galerkin solutions

    PubMed Central

    Peters, Jörg

    2015-01-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090

  17. Single Object Tracking With Fuzzy Least Squares Support Vector Machine.

    PubMed

    Zhang, Shunli; Zhao, Sicong; Sui, Yao; Zhang, Li

    2015-12-01

    Single object tracking, in which a target is often initialized manually in the first frame and then is tracked and located automatically in the subsequent frames, is a hot topic in computer vision. The traditional tracking-by-detection framework, which often formulates tracking as a binary classification problem, has been widely applied and achieved great success in single object tracking. However, there are some potential issues in this formulation. For instance, the boundary between the positive and negative training samples is fuzzy, and the objectives of tracking and classification are inconsistent. In this paper, we attempt to address the above issues from the fuzzy system perspective and propose a novel tracking method by formulating tracking as a fuzzy classification problem. First, we introduce the fuzzy strategy into tracking and propose a novel fuzzy tracking framework, which can measure the importance of the training samples by assigning different memberships to them and offer more strict spatial constraints. Second, we develop a fuzzy least squares support vector machine (FLS-SVM) approach and employ it to implement a concrete tracker. In particular, the primal form, dual form, and kernel form of FLS-SVM are analyzed and the corresponding closed-form solutions are derived for efficient realizations. Besides, a least squares regression model is built to control the update adaptively, retaining the robustness of the appearance model. The experimental results demonstrate that our method can achieve comparable or superior performance to many state-of-the-art methods. PMID:26441419

  18. Least-squares methods involving the H^{-1} inner product

    SciTech Connect

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H^{-1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H^{-1} inner product.

  19. Evaluation of fuzzy inference systems using fuzzy least squares

    NASA Technical Reports Server (NTRS)

    Barone, Joseph M.

    1992-01-01

    Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.

  20. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact rate between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. New-infection data for influenza from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the correlation coefficient. The numerical results show positive correlation between the estimated parameters and the number of infected people.
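
    A rough sketch of this estimation loop is given below (Python, assuming a plain forward-Euler discretization and scalar Tikhonov regularization; the paper combines FEM with Euler). The data source, parameter bounds, and all names are illustrative.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def fit_contact_rate(I_data, N, gamma, dt=1.0, alpha=1e-3):
            """Estimate the SIR contact rate beta from daily infected counts
            by regularized least squares."""
            def simulate(beta):
                S, I = N - I_data[0], I_data[0]
                out = [I]
                for _ in range(len(I_data) - 1):    # forward Euler steps
                    dS = -beta * S * I / N
                    dI = beta * S * I / N - gamma * I
                    S, I = S + dt * dS, I + dt * dI
                    out.append(I)
                return np.array(out)
            def objective(beta):
                r = simulate(beta) - I_data
                return r @ r + alpha * beta**2      # misfit + regularization
            return minimize_scalar(objective, bounds=(0.0, 5.0),
                                   method='bounded').x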

  1. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

  2. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally distributed...

  3. Multilevel solvers of first-order system least-squares for Stokes equations

    SciTech Connect

    Lai, Chen-Yao G.

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations, based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L^2-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  4. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  5. A least squares closure approximation for liquid crystalline polymers

    NASA Astrophysics Data System (ADS)

    Sievenpiper, Traci Ann

    2011-12-01

    An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing kinetic simulations for a Landau-de Gennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.

  6. Iterative least-squares solvers for the Navier-Stokes equations

    SciTech Connect

    Bochev, P.

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  7. PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION

    EPA Science Inventory

    The mathematics behind the techniques of principal component analysis and partial least squares regression is presented in detail, starting from the appropriate extreme conditions. The meaning of the resultant vectors and many of their mathematical interrelationships are also presented.

  8. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  9. Iterative least squares method for global positioning system

    NASA Astrophysics Data System (ADS)

    He, Y.; Bilgic, A.

    2011-08-01

    The efficient implementation of positioning algorithms is investigated for the Global Positioning System (GPS). In order to do the positioning, the pseudoranges between the receiver and the satellites are required. The most commonly used algorithm for position computation from pseudoranges is the non-linear Least Squares (LS) method. Linearization is done to convert the non-linear system of equations into an iterative procedure, which requires the solution of a linear system of equations in each iteration, i.e., the linear LS method is applied iteratively. CORDIC-based approximate rotations are used while computing the QR decomposition for solving the LS problem in each iteration. By choosing the accuracy of the approximation, e.g. with a chosen number of optimal CORDIC angles per rotation, the LS computation can be simplified. The accuracy of the positioning results is compared for various numbers of required iterations and various approximation accuracies using real GPS data. The results show that very coarse approximations are sufficient for reasonable positioning accuracy. Therefore, the presented method reduces the computational complexity significantly and is highly suited for hardware implementation.
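
    The iteration itself is standard and easy to sketch; below is a plain NumPy version in which the paper's CORDIC-based approximate QR step is replaced by an exact least-squares solve. Satellite positions sats (k x 3), pseudoranges rho, and all names are illustrative.

        import numpy as np

        def gps_position(sats, rho, iters=10):
            """Iteratively linearized LS for GPS: solves for receiver
            position p and clock bias b from pseudoranges
            rho ~ ||s - p|| + b, one linear LS problem per iteration."""
            p, b = np.zeros(3), 0.0
            for _ in range(iters):
                d = np.linalg.norm(sats - p, axis=1)     # geometric ranges
                r = rho - (d + b)                        # residuals
                J = np.hstack([-(sats - p) / d[:, None],
                               np.ones((len(rho), 1))])  # [dp, db] Jacobian
                dx, *_ = np.linalg.lstsq(J, r, rcond=None)
                p, b = p + dx[:3], b + dx[3]
            return p, b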

  10. A note on the limitations of lattice least squares

    NASA Technical Reports Server (NTRS)

    Gillis, J. T.; Gustafson, C. L.; Mcgraw, G. A.

    1988-01-01

    This paper quantifies the known limitation of lattice least squares to ARX models in terms of the dynamic properties of the system being modeled. This allows determination of the applicability of lattice least squares in a given situation. The central result is that an equivalent ARX model exists for an ARMAX system if and only if the ARMAX system has no transmission zeros from the noise port to the output port. The technique used to prove this fact is a construction using the matrix fractional description of the system. The final section presents two computational examples.

  11. Computing circles and spheres of arithmetic least squares

    NASA Astrophysics Data System (ADS)

    Nievergelt, Yves

    1994-07-01

    A proof of the existence and uniqueness of L. Moura and R. Kitney's circle of least squares leads to estimates of the accuracy with which a computer can determine that circle. The result shows that the accuracy deteriorates as the correlation between the coordinates of the data points increases in magnitude. Yet a numerically more stable computation of eigenvectors yields the limiting straight line, which a further analysis reveals to be the line of total least squares. The same analysis also provides generalizations to fitting spheres in higher dimensions.
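
    Moura and Kitney's exact formulation is not reproduced in the abstract, so the sketch below shows a generic algebraic least-squares circle fit of the kind analyzed: writing the circle as x^2 + y^2 = 2ax + 2by + c makes the problem linear in (a, b, c). Names are hypothetical.

        import numpy as np

        def fit_circle(x, y):
            """Algebraic LS circle fit: center (a, b), radius
            sqrt(c + a^2 + b^2), from the linear system
            [2x 2y 1] [a b c]^T ~ x^2 + y^2."""
            M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
            return (a, b), np.sqrt(c + a**2 + b**2)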

  12. On realizations of least-squares estimation and Kalman filtering by systolic arrays

    NASA Technical Reports Server (NTRS)

    Chen, M. J.; Yao, K.

    1986-01-01

    Least-squares (LS) estimation is a basic operation in many signal processing problems. Given y = Ax + v, where A is an m x n coefficient matrix, y is an m x 1 observation vector, and v is an m x 1 zero mean white noise vector, a simple least-squares solution is finding the estimated vector x which minimizes the norm ||Ax - y||. It is well known that for an ill-conditioned matrix A, solving least-squares problems by orthogonal triangular (QR) decomposition and back substitution has robust numerical properties under finite word length effects since the 2-norm is preserved. Many fast algorithms have been proposed and applied to systolic arrays. Gentleman-Kung (1981) first presented the triangular systolic array for a basic Givens reduction. McWhirter (1983) used this array structure to find the least-squares estimation errors. Then by a geometric approach, several different systolic array realizations of the recursive least-squares estimation algorithms of Lee et al. (1981) were derived by Kalson-Yao (1985). Basic QR decomposition algorithms are considered in this paper and it is found that under a one-row time updating situation, the Householder transformation degenerates to a simple Givens reduction. Next, an improved least-squares estimation algorithm is derived by considering a modified version of fast Givens reduction. From this approach, the basic relationship between Givens reduction and the Modified-Gram-Schmidt transformation can easily be understood. This improved algorithm also has simpler computational and inter-cell connection complexities when compared with other known least-squares algorithms and is more realistic for systolic array implementation.
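
    The basic Givens reduction referenced above is sketched here in plain NumPy (the systolic-array mapping and the fast/modified variants are not reproduced). Each rotation zeroes one subdiagonal entry while preserving the 2-norm, which is the source of the robustness noted in the abstract.

        import numpy as np

        def givens_qr(A):
            """QR decomposition by Givens rotations: returns Q, R with
            A = Q @ R, zeroing subdiagonal entries column by column."""
            R = A.astype(float)
            m, n = R.shape
            Q = np.eye(m)
            for j in range(n):
                for i in range(m - 1, j, -1):        # zero R[i, j]
                    a, b = R[i - 1, j], R[i, j]
                    r = np.hypot(a, b)
                    if r == 0.0:
                        continue
                    c, s = a / r, b / r
                    G = np.array([[c, s], [-s, c]])  # 2 x 2 rotation
                    R[[i - 1, i], :] = G @ R[[i - 1, i], :]
                    Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
            return Q, R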

  13. Parallel block schemes for large scale least squares computations

    SciTech Connect

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

  14. Least squares approximation of two-dimensional FIR digital filters

    NASA Astrophysics Data System (ADS)

    Alliney, S.; Sgallari, F.

    1980-02-01

    In this paper, a new method for the synthesis of two-dimensional FIR digital filters is presented. The method is based on a least-squares approximation of the ideal frequency response; an orthogonality property of certain functions, related to the frequency sampling design, improves the computational efficiency.

  15. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    ERIC Educational Resources Information Center

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than a global optimum.

  16. SAS Partial Least Squares (PLS) for Discriminant Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this work was to implement discriminant analysis using SAS partial least squares (PLS) regression for analysis of spectral data. This was done in combination with previous efforts which implemented data pre-treatments including scatter correction, derivatives, mean centering, and v...

  17. On the Routh approximation technique and least squares errors

    NASA Technical Reports Server (NTRS)

    Aburdene, M. F.; Singh, R.-N. P.

    1979-01-01

    A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.

  18. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

    2011-01-01

    This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.

  19. On the equivalence of Kalman filtering and least-squares estimation

    NASA Astrophysics Data System (ADS)

    Mysen, E.

    2016-07-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
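
    The measurement-update half of that equivalence is easy to exhibit: the standard Kalman update below (NumPy, illustrative names) is algebraically the same as adding the normal-equation contributions H^T R^-1 H and H^T R^-1 z to an information-form least-squares estimator. This is a generic sketch, not the paper's generalized filter.

        import numpy as np

        def kalman_measurement_update(x, P, H, z, R):
            """Standard Kalman measurement update for z ~ H x + noise,
            with noise covariance R and prior state covariance P."""
            S = H @ P @ H.T + R                     # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x_new = x + K @ (z - H @ x)
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new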

  20. Constrained hierarchical least square nonlinear equation solvers. [for indefinite stiffness and large structural deformations

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Lackney, J.

    1986-01-01

    The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.

  1. Least-squares streamline diffusion finite element approximations to singularly perturbed convection-diffusion problems

    SciTech Connect

    Lazarov, R D; Vassilevski, P S

    1999-05-06

    In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.

  2. Source allocation by least-squares hydrocarbon fingerprint matching

    SciTech Connect

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  3. Source allocation by least-squares hydrocarbon fingerprint matching.

    PubMed

    Burns, William A; Mudge, Stephen M; Bence, A Edward; Boehm, Paul D; Brown, John S; Page, David S; Parker, Keith R

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. PMID:17144278

  4. Weighted discrete least-squares polynomial approximation using randomized quadratures

    NASA Astrophysics Data System (ADS)

    Zhou, Tao; Narayan, Akil; Xiu, Dongbin

    2015-10-01

    We discuss the problem of polynomial approximation of multivariate functions using discrete least squares collocation. The problem stems from uncertainty quantification (UQ), where the independent variables of the functions are random variables with specified probability measure. We propose to construct the least squares approximation on points randomly and uniformly sampled from tensor product Gaussian quadrature points. We analyze the stability properties of this method and prove that the method is asymptotically stable, provided that the number of points scales linearly (up to a logarithmic factor) with the cardinality of the polynomial space. Specific results in both bounded and unbounded domains are obtained, along with a convergence result for Chebyshev measure. Numerical examples are provided to verify the theoretical results.
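    As a rough illustration of the sampling strategy described above, the following one-dimensional sketch builds a discrete least squares polynomial approximation on points drawn uniformly from a Gauss-Legendre rule (the target function, rule size, and oversampling factor are arbitrary choices, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    p = 10                       # cardinality of the polynomial space
    m = 4 * p                    # sample count: linear in p (up to log factors)
    f = lambda x: np.exp(x) * np.sin(3 * x)   # hypothetical target function

    # Candidate points: nodes of a large Gauss-Legendre quadrature rule.
    nodes, _ = np.polynomial.legendre.leggauss(200)
    x = rng.choice(nodes, size=m, replace=False)

    # Vandermonde-type matrix in the Legendre basis, then least squares.
    V = np.polynomial.legendre.legvander(x, p - 1)
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)

    # Compare the polynomial against f at a few test points.
    xt = np.linspace(-1.0, 1.0, 5)
    print(np.polynomial.legendre.legval(xt, coef) - f(xt))
    ```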

  5. Assessment of weighted-least-squares-based gas path analysis

    NASA Astrophysics Data System (ADS)

    Doel, D. L.

    1994-04-01

    Manufacturers of gas turbines have searched for three decades for a reliable way to use gas path measurements to determine the health of jet engine components. They have been hindered in this pursuit by the quality of the measurements used to carry out the analysis. Engine manufacturers have chosen weighted-least-squares techniques to reduce the inaccuracy caused by sensor error. While these algorithms are clearly an improvement over the previous generation of gas path analysis programs, they still fail in many situations. This paper describes some of the failures and explores their relationship to the underlying analysis technique. It also describes difficulties in implementing a gas path analysis program. The paper concludes with an appraisal of weighted-least-squares-based gas path analysis.

  6. Anisotropy minimization via least squares method for transformation optics.

    PubMed

    Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

    2014-07-28

    In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero. PMID:25089468

  7. Least-squares estimation of batch culture kinetic parameters.

    PubMed

    Ong, S L

    1983-10-01

    This article concerns the development of a simple and effective least-squares procedure for estimating the kinetic parameters in Monod expressions from batch culture data. The basic approach employed in this work was to translate the problem of parameter estimation into a mathematical model containing a single decision variable. The resulting model was then solved by an efficient one-dimensional search algorithm which can be adapted to any microcomputer or advanced programmable calculator. The procedure was tested on synthetic data (substrate concentrations) with different types and levels of error. The effect of endogenous respiration on the estimated values of the kinetic parameters was also assessed. From the results of these analyses, the least-squares procedure was concluded to be very effective. PMID:18548565
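    The reduction to a single decision variable can be sketched with a Monod-type rate model: for a fixed half-saturation constant Ks, the maximum rate enters linearly and has a closed-form least-squares estimate, leaving only a one-dimensional search over Ks. This is an illustration of the general idea with invented data, not the article's exact formulation:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # substrate levels
    v = np.array([0.18, 0.30, 0.44, 0.62, 0.72, 0.80])  # observed rates

    def sse(Ks):
        phi = S / (Ks + S)                 # regressor for mu_max at this Ks
        mu_max = (phi @ v) / (phi @ phi)   # conditional least-squares estimate
        return np.sum((v - mu_max * phi) ** 2)

    # One-dimensional bounded search over the single decision variable Ks.
    res = minimize_scalar(sse, bounds=(1e-6, 100.0), method="bounded")
    Ks = res.x
    phi = S / (Ks + S)
    mu_max = (phi @ v) / (phi @ phi)
    print(f"Ks = {Ks:.3f}, mu_max = {mu_max:.3f}")
    ```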

  8. Speckle reduction by phase-based weighted least squares.

    PubMed

    Zhu, Lei; Wang, Weiming; Qin, Jing; Heng, Pheng-Ann

    2014-01-01

    Although ultrasonography has been widely used in clinical applications, diagnosis is greatly hampered by artifacts in ultrasound images, especially speckle noise. This paper proposes a novel framework for speckle reduction by using a phase-based weighted least squares optimization. The proposed approach can effectively smooth out speckle noise while preserving the features in the image, e.g., edges with different contrasts. To this end, we first employ a local phase-based measure, which is theoretically intensity-invariant, to extract the edge map from the input image. The edge map is then incorporated into the weighted least squares framework to supervise the optimization during despeckling, so that low contrast edges are retained while the noise is greatly reduced. Experimental results on synthetic and clinical ultrasound images demonstrate that our approach performs better than state-of-the-art methods. PMID:25570846

  9. Least-squares finite element methods for quantum chromodynamics

    SciTech Connect

    Ketelsen, Christian; Brannick, J; Manteuffel, T; Mccormick, S

    2008-01-01

    A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2D Schwinger model of quantum electrodynamics.

  10. Generalized Least Squares Estimators in the Analysis of Covariance Structures.

    ERIC Educational Resources Information Center

    Browne, Michael W.

    This paper concerns situations in which a p x p covariance matrix is a function of an unknown q x 1 parameter vector y-sub-o. Notation is defined in the second section, and some algebraic results used in subsequent sections are given. Section 3 deals with asymptotic properties of generalized least squares (G.L.S.) estimators of y-sub-o. Section 4…

  11. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  12. Least squares restoration of multi-channel images

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Galatsanos, Nikolas P.

    1989-01-01

    In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.

  13. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  14. An Alternating Least Squares Method for the Weighted Approximation of a Symmetric Matrix.

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.; Kiers, Henk A. L.

    1993-01-01

    R. A. Bailey and J. C. Gower explored approximating a symmetric matrix "B" by another, "C," in the least squares sense when the squared discrepancies for diagonal elements receive specific nonunit weights. A solution is proposed where "C" is constrained to be positive semidefinite and of a fixed rank. (SLD)

  15. Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.

    ERIC Educational Resources Information Center

    Knol, Dirk L.; ten Berge, Jos M. F.

    1989-01-01

    An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…

  16. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix

  17. Comparing implementations of penalized weighted least-squares sinogram restoration

    SciTech Connect

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-15

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix

  18. Partial least squares Cox regression for genome-wide data.

    PubMed

    Nygård, Ståle; Borgan, Ornulf; Lingjaerde, Ole Christian; Størvold, Hege Leite

    2008-06-01

    Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120-S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods. PMID:18188699

  19. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    PubMed

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    Analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain smaller root mean square errors of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. PMID:26810185

  20. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    NASA Astrophysics Data System (ADS)

    Kermarrec, Gaël; Schön, Steffen

    2016-05-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence for a certain class of polynomial regressions between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account thanks to a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the row elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the mm for all simulated GPS cases and below the sub-mm for the relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
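    For the mean estimator, the stated equivalence is easy to verify numerically: the diagonal matrix built from the row sums of the weight matrix W = Sigma^{-1} reproduces the fully populated GLS estimate exactly. A minimal sketch (the AR(1)-type covariance and sample size are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    t = np.arange(n)

    # Fully populated covariance with exponentially decaying correlations.
    Sigma = 0.9 ** np.abs(t[:, None] - t[None, :])
    W = np.linalg.inv(Sigma)
    y = rng.multivariate_normal(np.full(n, 5.0), Sigma)
    one = np.ones(n)

    gls = (one @ W @ y) / (one @ W @ one)   # fully populated weighting
    d = W.sum(axis=1)                       # row sums -> equivalent diagonal
    dwls = (d @ y) / d.sum()                # diagonally weighted estimate
    print(gls - dwls)                       # ~0 up to round-off
    ```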

  1. A negative-norm least squares method for Reissner-Mindlin plates

    NASA Astrophysics Data System (ADS)

    Bramble, J. H.; Sun, T.

    1998-07-01

    In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.

  2. Positive Scattering Cross Sections using Constrained Least Squares

    SciTech Connect

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-09-27

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.

  3. Simultaneous least squares fitter based on the Lagrange multiplier method

    NASA Astrophysics Data System (ADS)

    Guan, Ying-Hui; Lü, Xiao-Rui; Zheng, Yang-Heng; Zhu, Yong-Sheng

    2013-10-01

    We developed a least squares fitter for extracting expected physics parameters from correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity using linearization during the χ2 minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the D0-D̄0 mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.
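    For an already linear constraint, the core of such a fitter is an equality-constrained least-squares solve via the Karush-Kuhn-Tucker (KKT) system. A generic sketch with invented data and constraint, not the authors' code:

    ```python
    import numpy as np

    # Minimize (y - x)^T V^{-1} (y - x)  subject to  A x = b.
    y = np.array([1.02, 2.96, 4.10])      # measured observables
    V = np.diag([0.04, 0.09, 0.16])       # their covariance matrix
    A = np.array([[1.0, 1.0, -1.0]])      # constraint: x0 + x1 - x2 = 0
    b = np.array([0.0])

    Vinv = np.linalg.inv(V)
    n, m = len(y), len(b)

    # KKT system: [[V^{-1}, A^T], [A, 0]] [x; lam] = [V^{-1} y; b].
    K = np.block([[Vinv, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([Vinv @ y, b])
    sol = np.linalg.solve(K, rhs)
    x, lam = sol[:n], sol[n:]
    print("adjusted observables:", x)
    print("constraint residual:", A @ x - b)   # ~0
    ```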

  4. Recursive least squares estimation and Kalman filtering by systolic arrays

    NASA Technical Reports Server (NTRS)

    Chen, M. J.; Yao, K.

    1988-01-01

    One of the most promising new directions for high-throughput-rate problems is that based on systolic arrays. In this paper, using the matrix-decomposition approach, a systolic Kalman filter is formulated as a modified square-root information filter consisting of a whitening filter followed by a simple least-squares operation based on the systolic QR algorithm. By proper skewing of the input data, a fully pipelined time and measurement update systolic Kalman filter can be achieved with O(n^2) processing cells, resulting in a system throughput rate of O(n).

  5. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.

  6. EFFICIENCY OF LEAST SQUARES ESTIMATORS IN THE PRESENCE OF SPATIAL AUTOCORRELATION

    EPA Science Inventory

    The authors consider the effect of spatial autocorrelation on inferences made using ordinary least squares estimation. It is found, in some cases, that ordinary least squares estimators provide a reasonable alternative to the estimated generalized least squares estimators recom...

  7. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

    2003-01-01

    The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
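    A minimal sketch of the (inverse-distance) weighted least-squares gradient reconstruction discussed above, for a scalar value u0 at point x0 and neighbour values uk (the stencil and function are illustrative):

    ```python
    import numpy as np

    def ls_gradient(x0, u0, xk, uk, weighted=True):
        """Solve min sum_k w_k (u_k - u_0 - g.(x_k - x_0))^2 for the gradient g."""
        dx = xk - x0                    # (K, 2) offsets to neighbours
        du = uk - u0                    # (K,) value differences
        w = 1.0 / np.linalg.norm(dx, axis=1) if weighted else np.ones(len(du))
        # Inverse-distance weighting applied as row scaling of the K x 2 system.
        g, *_ = np.linalg.lstsq(dx * w[:, None], du * w, rcond=None)
        return g

    # Highly stretched stencil, where the choice of weighting matters most.
    x0 = np.array([0.0, 0.0])
    xk = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])
    u = lambda p: 2.0 * p[..., 0] + 3.0 * p[..., 1]
    print(ls_gradient(x0, u(x0), xk, u(xk)))   # ~ [2, 3]
    ```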

  8. A least-squares framework for Component Analysis.

    PubMed

    De la Torre, Fernando

    2012-06-01

    Over the last century, Component Analysis (CA) methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Locality Preserving Projections (LPP), and Spectral Clustering (SC) have been extensively used as a feature extraction step for modeling, classification, visualization, and clustering. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and nonlinear representations of data in closed form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank deficient matrices (e.g., the small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LPP, SC, and their kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: 1) it provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; 2) it yields efficient numerical schemes to solve CA techniques; 3) it overcomes the small sample size problem; 4) it provides a framework to easily extend CA methods. We derive weighted generalizations of PCA, LDA, SC, and CCA, and several new CA techniques. PMID:21911913

  9. Forecasting Istanbul monthly temperature by multivariate partial least square

    NASA Astrophysics Data System (ADS)

    Ertaç, Mefharet; Firuzan, Esin; Solum, Şenol

    2015-07-01

    Weather forecasting, especially temperature forecasting, has always been a popular subject since it affects our daily life and, as statistics does, always involves uncertainty. The goals of this study are (a) to forecast monthly mean temperature using meteorological variables such as temperature, humidity and rainfall; and (b) to improve forecast ability by evaluating the forecasting errors depending upon parameter changes and local or global forecasting methods. Approximately 100 years of meteorological data from 54 automatic meteorology observation stations of Istanbul, the mega city of Turkey, are analyzed to infer the meteorological behaviour of the city. A new partial least squares (PLS) forecasting technique based on chaotic analysis is also developed by using nonlinear time series and variable selection methods. The proposed model is also compared with artificial neural networks (ANNs), which model the nonlinear relation between inputs and outputs with neurons working like a human brain. Ordinary least squares (OLS), PLS and ANN methods are used for nonlinear time series forecasting in this study. The major findings are the chaotic nature of the meteorological data of Istanbul and the best performance values of the proposed PLS model.

  10. Faraday rotation data analysis with least-squares elliptical fitting

    NASA Astrophysics Data System (ADS)

    White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D.

    2010-10-01

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
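    A naive textbook version of direct least-squares ellipse fitting (in the spirit of Fitzgibbon et al.: minimize the algebraic conic residual subject to an ellipticity constraint, which leads to a generalized eigenproblem). The data are synthetic, and production code would use a numerically stabilized variant:

    ```python
    import numpy as np
    from scipy.linalg import eig

    def fit_ellipse(x, y):
        # General conic a0*x^2 + a1*x*y + a2*y^2 + a3*x + a4*y + a5 = 0.
        D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
        S = D.T @ D                     # scatter matrix
        C = np.zeros((6, 6))            # constraint 4*a0*a2 - a1^2 = 1
        C[0, 2] = C[2, 0] = 2.0
        C[1, 1] = -1.0
        w, V = eig(S, C)                # generalized eigenproblem S a = w C a
        w = np.real(w)
        i = np.flatnonzero(np.isfinite(w) & (w > 0))[0]  # the one positive value
        return np.real(V[:, i])

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 60)
    x = 3.0 * np.cos(t) + 0.5 + rng.normal(0.0, 0.01, t.size)
    y = 1.5 * np.sin(t) - 1.0 + rng.normal(0.0, 0.01, t.size)

    a = fit_ellipse(x, y)
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    print(np.abs(D @ a).max())          # small conic residual on the data
    ```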

  11. An Incremental Weighted Least Squares Approach to Surface Light Fields

    NASA Astrophysics Data System (ADS)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

  12. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

    PubMed Central

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China’s regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
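    One plausible reading of the construction (standardized residuals and a row-normalized weight matrix), shown only as a sketch rather than the paper's exact definition:

    ```python
    import numpy as np

    def residual_autocorrelation(e, W):
        """Moran-type index of regression residuals e under spatial weights W."""
        W = W / W.sum(axis=1, keepdims=True)   # row-normalize the weights
        z = (e - e.mean()) / e.std()           # standardize the residuals
        return (z @ W @ z) / (z @ z)

    # Tiny demo: four regions on a line with binary contiguity weights.
    e = np.array([0.5, 0.4, -0.3, -0.6])       # residuals from some regression
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(residual_autocorrelation(e, W))      # > 0: positive autocorrelation
    ```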

  13. Cross-term free based bistatic radar system using sparse least squares

    NASA Astrophysics Data System (ADS)

    Sevimli, R. Akin; Cetin, A. Enis

    2015-05-01

    Passive Bistatic Radar (PBR) systems use illuminators of opportunity, such as FM, TV, and DAB broadcasts. The most common illuminators of opportunity used in PBR systems are FM radio stations. Single FM channel based PBR systems do not have high range resolution and may turn out to be noisy. To enhance the range resolution of PBR systems, algorithms using several FM channels at the same time have been proposed. In standard methods, consecutive FM channels are translated to baseband as is and fed to the matched filter to compute the range-Doppler map. Multichannel FM based PBR systems have better range resolution than single channel systems; however, spurious sidelobe peaks occur as a side effect. In this article, we linearly predict the surveillance signal using the modulated and delayed reference signal components. We vary the modulation frequency and the delay to cover the entire range-Doppler plane. Whenever there is a target at a specific range and Doppler value, the prediction error is minimized. The cost function of the linear prediction equation has three components. The first term is the real part of the ordinary least squares term, the second term is the imaginary part of the least squares, and the third component is the l2-norm of the prediction coefficients. Separate minimization of the real and imaginary parts reduces the side lobes and decreases the noise level of the range-Doppler map. The third term enforces a sparse solution of the least squares problem. We experimentally observed that this approach is better than both the standard least squares and other sparse least squares approaches in terms of side lobes. Extensive simulation examples will be presented in the final form of the paper.

  14. Near-least-squares radio frequency interference suppression

    NASA Astrophysics Data System (ADS)

    Miller, Timothy R.; McCorkle, John W.; Potter, Lee C.

    1995-06-01

    We present an algorithm for the removal of narrow-band interference from wideband signals. We apply the algorithm to suppress radio frequency interference encountered by ultra-wideband synthetic aperture radar systems used for foliage- and ground-penetrating imaging. For this application, we seek maximal reduction of interference energy, minimal loss and distortion of wideband target responses, and real-time implementation. To balance these competing objectives, we exploit prior information concerning the interference environment in designing an estimate-and-subtract estimation algorithm. The use of prior knowledge allows fast, near-least-squares estimation of the interference and permits iterative target signature excision in the interference estimation procedure to decrease estimation bias. The result is greater interference suppression, less target signature loss and distortion, and faster computation than is provided by existing techniques.

  15. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Yu, Yaojun

    A novel quality prediction method with a mobile time window is proposed for small-batch production processes based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which small-batch learning is handled well: nearer samples in the history data are assigned larger weights, while farther samples are assigned smaller weights. A typical machining process of cutting a bearing outer race is carried out and the real measured data are used in a contrast experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% of that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch production processes.

  16. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
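    A minimal recursive least squares sketch showing the estimate and covariance updates such a method builds on; the paper's colored-residual correction to the uncertainties is not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    theta_true = np.array([0.8, -1.5, 0.3])
    sigma = 0.1                     # measurement noise standard deviation

    theta = np.zeros(3)
    P = 1e6 * np.eye(3)             # large initial covariance: "no information"
    for _ in range(400):
        x = rng.normal(size=3)                     # regressor row
        z = x @ theta_true + sigma * rng.normal()  # noisy measurement
        k = P @ x / (1.0 + x @ P @ x)              # gain
        theta = theta + k * (z - x @ theta)        # estimate update
        P = P - np.outer(k, x) @ P                 # covariance update

    print(theta)                                  # close to theta_true
    print(sigma * np.sqrt(np.diag(P)))            # naive white-residual errors
    ```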

  17. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  18. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prediction, applying Least Squares Collocation to the residuals remaining after the fitted surface of the first step is applied. Our results, taking into account cross validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.

  19. Flow Applications of the Least Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  20. Random errors in interferometry with the least-squares method

    SciTech Connect

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the case where only intensity noise is present, and the other for the case where only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between the random error and the wavelength of the light source, and between the random error and the amplitude of the interference fringe, are also discussed.
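    A sketch of the kind of least-squares estimator such error analyses consider: with known frame phase shifts delta_k, the intensity model I_k = a + b*cos(phi + delta_k) is linear in (a, b*cos(phi), -b*sin(phi)), so the phase follows from one linear solve (synthetic data; not necessarily the paper's exact algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    delta = 2.0 * np.pi * np.arange(5) / 5        # five equally spaced shifts
    phi_true, a_true, b_true = 0.7, 2.0, 1.0

    # I_k = a + c*cos(delta_k) + s*sin(delta_k), with c = b*cos(phi) and
    # s = -b*sin(phi); add intensity noise.
    I = a_true + b_true * np.cos(phi_true + delta)
    I += rng.normal(0.0, 0.02, size=I.size)

    A = np.column_stack([np.ones_like(delta), np.cos(delta), np.sin(delta)])
    (a, c, s), *_ = np.linalg.lstsq(A, I, rcond=None)
    print(np.arctan2(-s, c), phi_true)            # recovered vs true phase
    ```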

  1. Material characterization via least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Swaddiwudhipong, S.; Tho, K. K.; Liu, Z. S.; Hua, J.; Ooi, N. S. B.

    2005-09-01

    Analytical methods to interpret the load indentation curves are difficult to formulate and execute directly due to material and geometric nonlinearities as well as complex contact interactions. In the present study, a new approach based on the least squares support vector machines (LS-SVMs) is adopted in the characterization of materials obeying power law strain-hardening. The input data for training and verification of the LS-SVM model are obtained from 1000 large strain-large deformation finite element analyses which were carried out earlier to simulate indentation tests. The proposed LS-SVM model relates the characteristics of the indentation load-displacement curve directly to the elasto-plastic material properties without resorting to any iterative schemes. The tuned LS-SVM model is able to accurately predict the material properties when presented with new sets of load-indentation curves which were not used in the training and verification of the model.

  2. Recursive least square vehicle mass estimation based on acceleration partition

    NASA Astrophysics Data System (ADS)

    Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

    2014-05-01

    Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal and removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least squares method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently, based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and the setting of the parameters. The acceleration limits are determined so as not only to reduce the estimation error but also to ensure enough data for the resistance and mass estimation in some critical situations. A modification of the algorithm to improve the result of the mass estimation is also discussed. Experimental data on asphalt, plastic runway, and gravel roads, as well as on sloping roads, are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.

  3. Spreadsheet for designing valid least-squares calibrations: A tutorial.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-02-01

    Instrumental methods of analysis are used to define the price of goods, the compliance of products with regulations, or the outcome of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not used so frequently due to the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for least-squares regression model application, the assessment of instrumental response linearity, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful in cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented. PMID:26653439
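    The core calculation such a spreadsheet automates can be sketched as an unweighted straight-line calibration plus the standard inverse-prediction uncertainty formula (invented data; valid only where signal precision is constant, as the tutorial requires):

    ```python
    import numpy as np

    conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # calibrators
    sig = np.array([0.02, 1.98, 4.05, 5.92, 8.10, 9.95])   # signals

    n = len(conc)
    b, a = np.polyfit(conc, sig, 1)              # slope b, intercept a
    res = sig - (a + b * conc)
    s_yx = np.sqrt(res @ res / (n - 2))          # residual standard deviation

    # Concentration of a sample measured m times, with its standard uncertainty.
    y0 = np.array([5.00, 5.04, 4.97])
    m = len(y0)
    x0 = (y0.mean() - a) / b
    Sxx = np.sum((conc - conc.mean()) ** 2)
    u_x0 = (s_yx / b) * np.sqrt(1/m + 1/n + (y0.mean() - sig.mean())**2 / (b**2 * Sxx))
    print(f"x0 = {x0:.3f} +/- {u_x0:.3f}")
    ```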

  4. A method for obtaining a least squares fit of a hyperplane to uncertain data

    SciTech Connect

    Reister, D.B.; Morris, M.D.

    1994-05-01

    For many least squares problems, the uncertainty is in one of the variables [for example, y = f(x) or z = f(x,y)]. However, for some problems, the uncertainty is in the geometric transformation from measured data to Cartesian coordinates, and all of the calculated variables are uncertain. When we seek the best least squares fit of a hyperplane to the data, we obtain an overdetermined system (we have n + 1 equations to determine n unknowns). By neglecting one of the equations at a time, we can obtain n + 1 different solutions for the unknown parameters. However, we cannot average the n + 1 hyperplanes to obtain a single best estimate. To obtain a solution without neglecting any of the equations, we solve an eigenvalue problem and use the eigenvector associated with the smallest eigenvalue to determine the unknown parameters. We have performed numerical experiments that compare our eigenvalue method to the approach of neglecting one equation at a time.
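    The eigenvalue route generalizes the familiar total least-squares plane fit: center the data, and the eigenvector of the scatter matrix with the smallest eigenvalue is the hyperplane normal. A small sketch on synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Noisy samples of the plane x + 2y - z = 1 in R^3.
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z = xy @ np.array([1.0, 2.0]) - 1.0
    pts = np.column_stack([xy, z]) + 0.01 * rng.normal(size=(200, 3))

    centered = pts - pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(centered.T @ centered)
    normal = evecs[:, 0]                 # smallest eigenvalue -> plane normal
    offset = normal @ pts.mean(axis=0)
    print(normal / normal[0], offset / normal[0])   # ~ [1, 2, -1] and ~ 1
    ```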

  5. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    SciTech Connect

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  6. Least squares adjustment of large-scale geodetic networks by orthogonal decomposition

    SciTech Connect

    George, J.A.; Golub, G.H.; Heath, M.T.; Plemmons, R.J.

    1981-11-01

    This article reviews some recent developments in the solution of large sparse least squares problems typical of those arising in geodetic adjustment problems. The new methods are distinguished by their use of orthogonal transformations which tend to improve numerical accuracy over the conventional approach based on the use of the normal equations. The adaptation of these new schemes to allow for the use of auxiliary storage and their extension to rank deficient problems are also described.

  7. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It was proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.

  8. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  9. On least squares approximations to indefinite problems of the mixed type

    NASA Technical Reports Server (NTRS)

    Fix, G. J.; Gunzburger, M. D.

    1978-01-01

    A least squares method is presented for computing approximate solutions of indefinite partial differential equations of the mixed type, such as those that arise in connection with transonic flutter analysis. The method retains the advantages of finite difference schemes, namely simplicity and sparsity of the resulting matrix system, yet it offers some great advantages over finite difference schemes. First, the method is insensitive to the value of the forcing frequency, i.e., the resulting matrix system is always symmetric and positive definite. As a result, iterative methods may be successfully employed to solve the matrix system, thus taking full advantage of the sparsity. Furthermore, the method is insensitive to the type of the partial differential equation, i.e., the computational algorithm is the same in elliptic and hyperbolic regions. In this work the method is formulated and numerical results for model problems are presented. Some theoretical aspects of least squares approximations are also discussed.

  10. Least squares finite element method with high continuity NURBS basis for incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Chen, De-Xiang; Xu, Zi-Li; Liu, Shi; Feng, Yong-Xin

    2014-03-01

    The modern least squares finite element method (LSFEM) has advantages over the mixed finite element method for non-self-adjoint problems like the Navier-Stokes equations, but has difficulty being norm-equivalent and mass-conservative when using a C0-type basis. In this paper, an LSFEM with non-uniform rational B-splines (NURBS) is proposed for the Navier-Stokes equations. High order continuity NURBS are used to construct the finite-dimensional spaces for both velocity and pressure. The variational form is derived from the governing equations with primitive variables, and the DOFs due to additional variables are not necessary. A novel k-refinement is employed which exhibits spectral convergence of the least squares functional. The method also has the same advantages as isogeometric analysis, such as automatic mesh generation and exact geometry representation. Several benchmark problems are solved using the proposed method. The results agree well with the benchmark solutions available in the literature. The results also show good performance in mass conservation.

  11. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard methods for deriving error estimates fail here, the computational evidence suggests that these methods are at least nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  12. A least-squares computational "tool kit". Nuclear data and measurements series

    SciTech Connect

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
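
    The formal solutions referred to above take a compact closed form. As a hedged Python sketch of the generalized least-squares condition (the report's own utilities, LSIOD and GLSIOD, are FORTRAN codes; this fragment is only illustrative):

        import numpy as np

        def gls(A, b, V):
            # Minimize (b - A x)^T V^{-1} (b - A x); with V = I this
            # reduces to the simple least-squares condition.
            Vi = np.linalg.inv(V)
            cov = np.linalg.inv(A.T @ Vi @ A)   # parameter covariance
            x = cov @ A.T @ Vi @ b              # parameter estimates
            return x, cov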

  13. Parsimonious extreme learning machine using recursive orthogonal least squares.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results. PMID:25291736

  14. Bootstrapping least-squares estimates in biochemical reaction networks.

    PubMed

    Linder, Daniel F; Rempała, Grzegorz A

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least-squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

  15. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) voltage vs. m air mass yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
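
    In its least-squares form, the calibration is simply a straight-line fit of ln(V) against air mass. A minimal sketch with synthetic data (all values illustrative):

        import numpy as np

        m = np.linspace(1.5, 5.0, 40)                   # air mass
        lnV = 1.8 - 0.12 * m + 0.005 * np.random.randn(m.size)

        slope, intercept = np.polyfit(m, lnV, 1)        # least-squares line
        tau = -slope                                    # optical depth
        V0 = np.exp(intercept)                          # calibration constant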

  16. Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors

    SciTech Connect

    Gavel, D

    2002-10-08

    A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.
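
    The mode-space view can be made concrete with the singular value decomposition. The sketch below is not the paper's preconditioning technique; it merely illustrates building a least-squares reconstructor whose weakest singular modes (where localized waffle-like behavior tends to concentrate) are projected out:

        import numpy as np

        def reconstructor(G, n_drop=0):
            # G maps phase values to measured slopes; the pseudoinverse
            # is formed mode by mode, discarding the n_drop weakest modes.
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            keep = len(s) - n_drop
            s_inv = np.zeros_like(s)
            s_inv[:keep] = 1.0 / s[:keep]
            return Vt.T @ np.diag(s_inv) @ U.T   # slopes -> phase estimate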

  17. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

  18. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to the network’s partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

  19. Curve-skeleton extraction using iterative least squares optimization.

    PubMed

    Wang, Yu-Shuen; Lee, Tong-Yee

    2008-01-01

    A curve skeleton is a compact representation of 3D objects and has numerous applications. It can be used to describe an object's geometry and topology. In this paper, we introduce a novel approach for computing curve skeletons for volumetric representations of the input models. Our algorithm consists of three major steps: 1) using iterative least squares optimization to shrink models while preserving their geometries and topologies, 2) extracting curve skeletons through the thinning algorithm, and 3) pruning unnecessary branches based on shrinking ratios. The proposed method is less sensitive to noise on the surface of models and can generate smoother skeletons. In addition, our shrinking algorithm requires little computation, since the optimization system can be factorized and stored in a precomputation step. We demonstrate several extracted skeletons that help evaluate our algorithm. We also experimentally compare the proposed method with other well-known methods. Experimental results show advantages when using our method over other techniques. PMID:18467765

  20. A recursive least squares-based demodulator for electrical tomography

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Zhou, Haili; Cao, Zhang

    2013-04-01

    In this paper, a recursive least squares (RLS)-based demodulator is proposed for electrical tomography (ET) that employs sinusoidal excitation. The new demodulator can output preliminary demodulation results for the amplitude and phase of a sinusoidal signal after processing the first two samples, and the demodulation precision and signal-to-noise ratio can be further improved by involving more samples in a recursive way. Thus, the trade-off between speed and precision in demodulating electrical parameters can be made flexibly according to the specific requirements of an ET system. The RLS-based demodulator is well suited to implementation in a field-programmable gate array (FPGA). Numerical simulation was carried out to prove its feasibility and to optimize the relevant parameters for hardware implementation, e.g., the precision of the fixed-point parameters, the sampling rate, and the resolution of the analog-to-digital converter. An FPGA-based capacitance measurement circuit for electrical capacitance tomography was constructed to implement and validate the RLS-based demodulator. Both simulation and experimental results demonstrate that the proposed demodulator is valid, capable of trading off demodulation speed against precision, and brings more flexibility to the hardware design of ET systems.
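
    The recursion itself is the standard RLS update applied to the two quadrature coefficients of the excitation frequency. A floating-point sketch (the paper's FPGA implementation uses fixed-point arithmetic, and the names here are assumptions):

        import numpy as np

        def rls_demodulate(samples, times, omega, lam=1.0):
            # Model: y(t) = a*sin(omega*t) + b*cos(omega*t); a preliminary
            # estimate exists after two samples and sharpens recursively.
            theta = np.zeros(2)
            P = 1e6 * np.eye(2)                    # large initial covariance
            for y, t in zip(samples, times):
                h = np.array([np.sin(omega * t), np.cos(omega * t)])
                k = P @ h / (lam + h @ P @ h)      # gain vector
                theta += k * (y - h @ theta)       # innovation update
                P = (P - np.outer(k, h @ P)) / lam
            a, b = theta
            return np.hypot(a, b), np.arctan2(b, a)   # amplitude, phase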

  1. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.

  2. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) voltage vs. m air mass yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  3. A duct mapping method using least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Douvenot, Rémi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph

    2008-12-01

    This paper introduces a "refractivity from clutter" (RFC) approach with an inversion method based on a pregenerated database. The RFC method exploits the information contained in the radar sea clutter return to estimate the refractive index profile. Whereas initial efforts were based on algorithms that give good accuracy at the cost of high computational needs, the present method is based on a learning machine algorithm in order to obtain a real-time system. This paper shows the feasibility of an RFC technique based on the least squares support vector machine inversion method by comparing it to a genetic algorithm on simulated and noise-free data, at 1 and 5 GHz. These data are simulated in the presence of ideal trilinear surface-based ducts. The learning machine is based on a pregenerated database computed using Latin hypercube sampling to improve the efficiency of the learning. The results show that little accuracy is lost compared to a genetic algorithm approach, whose computational time is very high, whereas the learning machine approach runs in real time. The advantage of a real-time RFC system is that it could work on several azimuths in near real time.

  4. Robustness of ordinary least squares in randomized clinical trials.

    PubMed

    Judkins, David R; Porter, Kristin E

    2016-05-20

    There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors works well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors. PMID:26694758
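
    The recommended analysis is a single option away in common statistical software. A hedged sketch using the statsmodels package on a simulated, strongly leptokurtic outcome (the simulation is illustrative, not the paper's design):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        x = rng.normal(size=n)
        # Mostly-zero outcome with a skewed positive tail plus a covariate effect.
        y = np.where(rng.random(n) < 0.8, 0.0,
                     rng.gamma(2.0, 5.0, n)) + 0.5 * x

        X = sm.add_constant(x)
        fit = sm.OLS(y, X).fit(cov_type="HC3")   # OLS with robust standard errors
        print(fit.params, fit.bse)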

  5. Improving the gradient in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Liu, Qiancheng

    2016-04-01

    Least-squares reverse time migration (LSRTM) is a linearized inversion technique used for estimating high-wavenumber reflectivity. However, due to the redundant overlay of the band-limited source wavelet, the gradient based on the cross-correlation imaging principle suffers from a loss of wavenumber information. We first prepare the residuals between observed and demigrated data by deconvolving with the amplitude spectrum of the source wavelet, and then migrate the preprocessed residuals by using the cross-correlation imaging principle. In this way, a gradient that preserves the spectral signature of the data residuals is obtained. The computational cost of source-wavelet removal is negligible compared to that of wavefield simulation. The two-dimensional Marmousi model, which contains complex geological structures, is considered to test our scheme. Numerical examples show that our improved gradient in LSRTM has better convergence behavior and yields inverted results of higher resolution. Finally, we attempt to update the background velocity with our inverted velocity perturbations to approach the true velocity.

  6. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach.

    PubMed

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-04-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. PMID:23454702

  7. Fast Dating Using Least-Squares Criteria and Algorithms

    PubMed Central

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley–Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to

  8. Fast Dating Using Least-Squares Criteria and Algorithms.

    PubMed

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
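
    The unconstrained rooted case reduces to elementary linear algebra. The following root-to-tip sketch is far simpler than the authors' tree-recursive algorithms, but it shows the least-squares relation between sampling dates and divergence that underlies them (the names are assumptions):

        import numpy as np

        def root_to_tip_rate(distances, dates):
            # distance ~ rate * (date - t_root): the slope estimates the
            # substitution rate, the x-intercept the date of the root.
            rate, intercept = np.polyfit(dates, distances, 1)
            return rate, -intercept / rate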

  9. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  10. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    SciTech Connect

    Dana Kelly

    2011-03-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  11. Least-squares electromagnetic analysis of thin dielectrics using surface equivalence

    NASA Astrophysics Data System (ADS)

    Shieh, Kuen-Wey

    2000-10-01

    In this thesis, the motivation was to study the applicability and test the limits of analytical formulations using surface equivalence in dealing with the scattering problem of a thin dielectric slab of finite extent. In this application of the surface equivalence principle, the unknowns, equivalent surface electric and magnetic currents, are established using the method of moments. Described herein, in order to solve for the unknowns, are four new numerical techniques, called LSM, CLSM, CLSM+RCA and CWLSM+RCA, employed to compute the radar cross section (RCS) of electromagnetic wave scattering from thin dielectric slabs of different thicknesses in three dimensions. The designations LSM, CLSM, CLSM+RCA and CWLSM+RCA stand for least squares method, constrained least squares method, constrained least squares method plus ring current approximation and constrained weighted least squares method plus ring current approximation, respectively. The least squares method is utilized in the new numerical techniques, providing a better solution in the null region of the RCS than the combined field integral equation (CFIE). The new numerical techniques employ surface distributions of equivalent currents, thus in principle requiring less computer memory than those employing volume distributions of current density. Moreover, there is no need to worry about how nearly perfect the absorbing boundary condition (ABC) used in the finite difference time domain (FDTD) technique must be. Further, in this work, the importance of the equivalent surface currents flowing on the edge of a thin slab (referred to as 'ring currents') has been identified. The new techniques also show fast convergence for the particularly challenging case of edge-on wave incidence, even when the slab is as thin as 0.001 λ0 (λ0 is the wavelength in free space). In particular, the CLSM+RCA and CWLSM+RCA analyses have been validated by experiments for the case of backward RCS, these experiments

  12. Frequency domain analysis and synthesis of lumped parameter systems using nonlinear least squares techniques

    NASA Technical Reports Server (NTRS)

    Hays, J. R.

    1969-01-01

    Lumped parameter system models are simple and computationally advantageous for frequency domain analysis of linear systems. A nonlinear least squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.
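
    A modern equivalent of such a program is a few lines of SciPy. The sketch below fits a first-order lumped model G(jw) = K/(1 + jw*tau) to frequency-response data by nonlinear least squares; the model choice and names are assumptions:

        import numpy as np
        from scipy.optimize import least_squares

        def fit_first_order(w, H_meas):
            # Stack real and imaginary parts into one real residual vector.
            def resid(p):
                K, tau = p
                err = K / (1.0 + 1j * w * tau) - H_meas
                return np.concatenate([err.real, err.imag])
            return least_squares(resid, x0=[1.0, 1.0]).x

        w = np.logspace(-1, 2, 50)
        H = 2.0 / (1.0 + 1j * w * 0.3)      # synthetic measurement
        print(fit_first_order(w, H))        # approximately [2.0, 0.3]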

  13. Compressible seal flow analysis using the finite element method with Galerkin solution technique

    NASA Technical Reports Server (NTRS)

    Zuk, J.

    1974-01-01

    A finite element method with a Galerkin solution (FEMGS) technique is formulated for the solution of nonlinear problems in high-pressure compressible seal flow analyses. An example of a three-dimensional axisymmetric flow having nonlinearities due to compressibility, area expansion, and convective inertia is used for illustrating the application of the technique.

  14. Interval analysis approach to rank determination in linear least squares problems

    SciTech Connect

    Manteuffel, T.A.

    1980-06-01

    The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices A^I is defined that contains all possible perturbations of A due to uncertainties; A^I is said to be rank deficient if any member of A^I is rank deficient. A modification to the Q-R decomposition method of solution of the least-squares problem allows a determination of the rank of A^I and a partial interval analysis of the solution vector x. This procedure requires the computation of R^-1. Another modification is proposed which determines the rank of A^I without computing R^-1. The additional computational effort is O(N^2), where N is the column dimension of A. 4 figures.
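
    A crude surrogate for the interval rank test can be written with the singular value decomposition: if some singular value of A falls below a bound on the possible perturbations, some member of the interval family is rank deficient. This illustrates the idea only and is not the paper's Q-R based procedure:

        import numpy as np

        def interval_rank(A, dA):
            # Treat singular values below the spectral norm of the
            # elementwise uncertainty bound dA as effectively zero.
            s = np.linalg.svd(A, compute_uv=False)
            return int(np.sum(s > np.linalg.norm(dA, 2)))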

  15. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  16. Stable least-squares matching for oblique images using bound constrained optimization and a robust loss function

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Xie, Linfu; Chen, Min

    2016-08-01

    Least-squares matching is a standard procedure in photogrammetric applications for obtaining sub-pixel accuracy in image correspondences. However, least-squares matching has also been criticized for its instability, reflected primarily in its requirements for a good initial correspondence and favorable image quality. In matching between oblique images, the image attributes of different views are notably different due to blur, illumination differences and other effects, which results in a more severe convergence problem. Aiming at improving the convergence rate and robustness of least-squares matching of oblique images, we incorporated prior geometric knowledge in the optimization process, expressed as bound constraints on the optimization parameters that confine the search for a solution to a reasonable region. Furthermore, to be resilient to outliers, we substituted the square loss with a robust loss function. To solve the composite problem, we reformulated least-squares matching as a bound constrained optimization problem, which can be solved with a bound-constrained Levenberg-Marquardt solver. Experimental results on images from two different penta-view oblique camera systems confirmed that the proposed method achieves reliable final convergence in various scenarios, compared to the approximately 20-50% convergence rate of classical least-squares matching.
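
    Both ingredients, bound constraints and a robust loss, are available in off-the-shelf solvers. A one-dimensional toy stand-in for the matching problem (estimating a bounded shift between two signals with outliers; all names and values are assumptions):

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 1, 200)
        f = np.sin(2 * np.pi * 3 * t)                # template signal

        def residual(p, g):
            # Template shifted by p[0], compared against the target g.
            return np.interp(t, t + p[0], f) - g

        g = np.sin(2 * np.pi * 3 * (t - 0.02))       # shifted target
        g[::25] += 1.0                               # inject outliers

        sol = least_squares(residual, x0=[0.0], args=(g,),
                            bounds=([-0.05], [0.05]), loss="soft_l1")
        print(sol.x)   # close to [0.02] despite the outliers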

  17. Voronoi based discrete least squares meshless method for heat conduction simulation in highly irregular geometries

    NASA Astrophysics Data System (ADS)

    Labibzadeh, Mojtaba

    2016-01-01

    A new technique is used in the Discrete Least Squares Meshless (DLSM) method to remove the common deficiencies of meshfree methods in handling problems containing cracks or concave boundaries. An enhanced discrete least squares meshless method, named VDLSM (Voronoi-based Discrete Least Squares Meshless), is developed in order to solve the steady-state heat conduction problem in irregular solid domains including concave boundaries or cracks. Existing meshless methods cannot precisely estimate the required unknowns in the vicinity of such boundaries, and research conducted to date has been limited to domains with regular convex boundaries. To this end, the advantages of the Voronoi tessellation algorithm are exploited: the support domains of the sampling points are determined using a Voronoi tessellation. For the weight functions, a cubic spline polynomial is used, based on a normalized distance variable that provides a high degree of smoothness near the above-mentioned discontinuities. Finally, Moving Least Squares (MLS) shape functions are constructed using a variational method. This straightforward scheme can properly estimate the unknowns (in this particular study, the temperatures at the nodal points) near and on the crack faces, crack tip or concave boundaries without the need for extra backward corrective procedures, i.e., iterative calculations for modifying the shape functions of nodes located near or on these types of complex boundaries. The accuracy and efficiency of the presented method are investigated by analyzing four examples. Results obtained from VDLSM are compared with available analytical results or with results of the well-known Finite Element Method (FEM) when an analytical solution is not available. The comparisons reveal that the proposed technique gives high accuracy for the solution of steady-state heat conduction problems in cracked domains or domains with concave boundaries

  18. A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.

    PubMed

    Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam

    2012-01-01

    Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles. PMID:22723895
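
    The baseline classical least squares step, and the paper's key idea of augmenting the reference matrix with principal components of training variation, can be outlined as follows (a hedged sketch, not the authors' implementation):

        import numpy as np

        def cls_fit(refs, spectrum):
            # refs: (n_wavelengths, n_components) reference spectra columns;
            # returns the fitted mixing coefficients.
            c, *_ = np.linalg.lstsq(refs, spectrum, rcond=None)
            return c

        def pca_columns(training, k):
            # Leading principal components of a training set of spectra.
            centered = training - training.mean(axis=0)
            _, _, Vt = np.linalg.svd(centered, full_matrices=False)
            return Vt[:k].T    # (n_wavelengths, k)

    Fitting against np.hstack([refs, pca_columns(training, k)]) then yields coefficients for the analytes and the variation modes together; the eigenvalue constraint described above would bound the latter.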

  19. Resolution of five-component mixture using mean centering ratio and inverse least squares chemometrics

    PubMed Central

    2013-01-01

    Background A comparative study of the use of mean centering of ratio spectra and inverse least squares for the resolution of paracetamol, methylparaben, propylparaben, chlorpheniramine maleate and pseudoephedrine hydrochloride has been achieved, showing that the two chemometric methods provide a good example of the high resolving power of these techniques. Method (I) is the mean centering of ratio spectra, which depends on using the mean-centered ratio spectra in four successive steps; this eliminates the derivative steps and therefore improves the signal-to-noise ratio. The absorption spectra of prepared solutions were measured in the range of 220–280 nm. Method (II) is based on inverse least squares, which depends on updating a developed multivariate calibration model. The absorption spectra of the prepared mixtures in the range 230–270 nm were recorded. Results The linear concentration ranges were 0–25.6, 0–15.0, 0–15.0, 0–45.0 and 0–100.0 μg mL-1 for paracetamol, methylparaben, propylparaben, chlorpheniramine maleate and pseudoephedrine hydrochloride, respectively. The mean recoveries for simultaneous determination were between 99.9-101.3% for the two methods. The two developed methods have been successfully used for prediction of the five-component mixture in Decamol Flu syrup with good selectivity, high sensitivity and extremely low detection limit. Conclusion No published method has been reported for simultaneous determination of the five components of this mixture, so the results of the mean centering of ratio spectra method were compared with those of the proposed inverse least squares method. Statistical comparison was performed using t-test and F-ratio at P = 0.05. There was no significant difference between the results. PMID:24028626

  20. Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.

    SciTech Connect

    Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.

    1999-08-17

    The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.

  1. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

    1991-01-01

    The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study comparing the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), are also presented. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

  2. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

    NASA Astrophysics Data System (ADS)

    Gao, Ling; Ren, Shouxin

    2005-10-01

    Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethyl ammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer medium. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples; thus the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of the noise removal can be improved. In the WPTPLS, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.

  3. On the decoding of intracranial data using sparse orthonormalized partial least squares

    NASA Astrophysics Data System (ADS)

    van Gerven, Marcel A. J.; Chao, Zenas C.; Heskes, Tom

    2012-04-01

    It has recently been shown that robust decoding of motor output from electrocorticogram signals in monkeys over prolonged periods of time has become feasible (Chao et al 2010 Front. Neuroeng. 3 1-10). In order to achieve these results, multivariate partial least-squares (PLS) regression was used. PLS uses a set of latent variables, referred to as components, to model the relationship between the input and the output data and is known to handle high-dimensional and possibly strongly correlated inputs and outputs well. We developed a new decoding method called sparse orthonormalized partial least squares (SOPLS) which was tested on a subset of the data used in Chao et al (2010) (freely obtainable from neurotycho.org (Nagasaka et al 2011 PLoS ONE 6 e22561)). We show that SOPLS reaches the same decoding performance as PLS using just two sparse components which can each be interpreted as encoding particular combinations of motor parameters. Furthermore, the sparse solution afforded by the SOPLS model allowed us to show the functional involvement of beta and gamma band responses in premotor and motor cortex for predicting the first component. Based on the literature, we conjecture that this first component is involved in the encoding of movement direction. Hence, the sparse and compact representation afforded by the SOPLS model facilitates interpretation of which spectral, spatial and temporal components are involved in successful decoding. These advantages make the proposed decoding method an important new tool in neuroprosthetics.

  4. Least Squares Shadowing Sensitivity Analysis of Chaotic and Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Gomez, Steven

    2013-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as those obtained using high-fidelity turbulence simulations. This breakdown is due to the "Butterfly Effect": the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic and turbulent fluid flows. LSS computes gradients using the "shadow trajectory," a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. This talk will outline Least Squares Shadowing and demonstrate it on several chaotic and turbulent fluid flows, including homogeneous isotropic turbulence, Rayleigh-Bénard convection and turbulent channel flow. We would like to acknowledge AFOSR Award F11B-T06-0007 under Dr. Fariba Fahroo, NASA Award NNH11ZEA001N under Dr. Harold Atkins, as well as financial support from ConocoPhillips, the NDSEG fellowship and the ANSYS Fellowship.

  5. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    SciTech Connect

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
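
    In outline, right-preconditioning LSQR with a triangular factor looks like the following (a dense-matrix sketch; the paper's setting is sparse and R comes from a QR factorization of the perturbed matrix):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, lsqr

        def r_preconditioned_lsqr(A, b, R):
            # Iterate on min ||(A R^{-1}) y - b||_2, then recover x = R^{-1} y.
            m, n = A.shape
            solve = lambda v: np.linalg.solve(R, v)        # R^{-1} v
            solve_t = lambda v: np.linalg.solve(R.T, v)    # R^{-T} v
            op = LinearOperator((m, n),
                                matvec=lambda v: A @ solve(v),
                                rmatvec=lambda v: solve_t(A.T @ v))
            return solve(lsqr(op, b)[0])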

  6. FOSLS (first-order systems least squares): An overview

    SciTech Connect

    Manteuffel, T.A.

    1996-12-31

    The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.

  7. Two new methods for solving large scale least squares in geodetic surveying computations

    NASA Astrophysics Data System (ADS)

    Murigande, Ch.; Toint, Ph. L.; Paquet, P.

    1986-12-01

    This paper considers the solution of linear least squares problems arising in space geodesy, with a special application to multistation adjustment by a short arc method based on Doppler observations. The widely used second-order regression algorithm due to Brown (1976) for reducing the normal equations system is briefly recalled. Then two algorithms which avoid the use of the normal equations are proposed. The first one is a direct method that applies orthogonal transformations to the observation matrix directly, in order to reduce it to upper triangular form. The solution is then obtained by back-substitution. The second method is iterative and uses a preconditioned conjugate gradient technique. A comparison of the three procedures is provided on data of the second European Doppler Observation Campaign.

  8. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.

    PubMed

    Choi, Sou-Cheng T; Saunders, Michael A

    2014-02-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255

  9. First-order system least squares for the pure traction problem in planar linear elasticity

    SciTech Connect

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L^2 norms to define the FOSLS functional, is shown under certain H^2 regularity assumptions to admit optimal H^1-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H^-1 norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L^2 norm and for displacement in an H^1 norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  10. Constrained least-squares estimation in deconvolution from wave-front sensing

    NASA Astrophysics Data System (ADS)

    Ford, S. D.; Welsh, B. M.; Roggemann, M. C.

    1998-05-01

    We address the optimal processing of astronomical images using the deconvolution from wave-front sensing technique (DWFS). A constrained least-squares (CLS) solution which incorporates ensemble average DWFS data is derived using Lagrange minimization. The new estimator requires DWFS data, noise statistics, OTF statistics, and a constraint. The constraint can be chosen such that the algorithm selects a conventional regularization constant automatically. No ad hoc parameter tuning is necessary. The algorithm uses an iterative Newton-Raphson minimization to determine the optimal Lagrange multiplier. Computer simulation of a 1 m telescope imaging through atmospheric turbulence is used to test the estimation scheme. CLS object estimates are compared with those processed via manual tuning of the regularization constant. The CLS algorithm provides images with comparable resolution and is computationally inexpensive, converging to a solution in less than 10 iterations.

  12. Partial least squares, conjugate gradient and the fisher discriminant

    SciTech Connect

    Faber, V.

    1996-12-31

    The theory of multivariate regression has been extensively studied and is commonly used in many diverse scientific areas. A wide variety of techniques are currently available for solving the problem of multivariate calibration. The volume of literature on this subject is so extensive that understanding which technique to apply can often be very confusing. A common class of techniques for solving linear systems, and consequently applications of linear systems to multivariate analysis, are iterative methods. While common linear system solvers typically involve the factorization of the coefficient matrix A in solving the system Ax = b, this method can be impractical if A is large and sparse. Iterative methods such as Gauss-Seidel, SOR, Chebyshev semi-iterative, and related methods also often depend upon parameters that require calibration and which are sometimes hard to choose properly. An iterative method which surmounts many of these difficulties is the method of conjugate gradient. Algorithms of this type find solutions iteratively, by optimally calculating the next approximation from the residuals.
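
    For concreteness, a textbook conjugate gradient iteration of the kind sketched in this abstract, computing each new approximation from the residuals (a generic sketch for a symmetric positive-definite system, not the chemometric application):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10):
          """Textbook CG for a symmetric positive-definite matrix A."""
          x = np.zeros(b.size)
          r = b - A @ x                # residual
          p = r.copy()                 # search direction
          rs = r @ r
          for _ in range(b.size):
              Ap = A @ p
              alpha = rs / (p @ Ap)            # optimal step along p
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p        # next A-conjugate direction
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))          # matches np.linalg.solve(A, b)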

  13. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable to the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  14. The program LOPT for least-squares optimization of energy levels

    NASA Astrophysics Data System (ADS)

    Kramida, A. E.

    2011-02-01

    The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem.
    Program summary
    Program title: LOPT
    Catalogue identifier: AEHM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 19 254
    No. of bytes in distributed program, including test data, etc.: 427 839
    Distribution format: tar.gz
    Programming language: Perl v.5
    Computer: PC, Mac, Unix workstations
    Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX)
    RAM: 3 Mwords or more
    Word size: 32 or 64
    Classification: 2.2
    Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals.
    Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution.
    Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N^3 and depends on the computer.
    Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision.
    Running time: 1 s for N = 100, or 60 s for N = 400 on a typical PC.
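
    A toy version of the underlying computation, offered as an illustration rather than LOPT's actual algorithm: each measured transition contributes one row of a design matrix over the unknown level energies (the ground level fixed at zero), and a weighted least-squares solve yields the optimized levels and their covariance; all numbers are invented:

      import numpy as np

      # (upper level, lower level, measured wavenumber, 1-sigma uncertainty)
      transitions = [
          (1, 0, 100.02, 0.05),
          (2, 0, 250.01, 0.05),
          (2, 1, 149.95, 0.03),
      ]
      A = np.zeros((3, 2))     # unknowns: E1, E2 (E0 fixed at 0)
      y = np.zeros(3)
      w = np.zeros(3)
      for k, (up, lo, val, unc) in enumerate(transitions):
          if up > 0:
              A[k, up - 1] += 1.0
          if lo > 0:
              A[k, lo - 1] -= 1.0
          y[k] = val
          w[k] = 1.0 / unc                     # weight = 1/uncertainty

      Aw = A * w[:, None]                      # row-scaled design matrix
      E, *_ = np.linalg.lstsq(Aw, y * w, rcond=None)
      cov = np.linalg.inv(Aw.T @ Aw)           # covariance of the levels
      print(E, np.sqrt(np.diag(cov)))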

  15. Parameter estimation in PS-InSAR deformation studies using integer least-squares techniques

    NASA Astrophysics Data System (ADS)

    Hanssen, R. F.; Ferretti, A.

    2002-12-01

    Interferometric synthetic aperture radar (InSAR) methods are increasingly used for measuring deformations of the earth's surface. Unfortunately, in many cases the problem of temporal decorrelation hampers successful measurements over longer time intervals. The permanent scatterers approach (PS-InSAR) for processing time series of SAR interferograms proves to be a good alternative by recognizing and analyzing single scatterers with a reliable phase behavior in time. Ambiguity resolution, or phase unwrapping, is the process of resolving the unknown cycle ambiguities in the radar data, and is one of the main problems in InSAR data analysis. In a single interferogram, the problem of phase unwrapping and parameter estimation is usually solved in separate consecutive computations. It is often assumed that the final result of the phase unwrapping is a deterministic signal, used as input for the parameter estimation, e.g. elevation and deformation. As a result, errors in the ambiguity resolution are usually not propagated into the final results, which can lead to a serious underestimation of errors in the parameters and consequently in the geophysical models which use these parameters. In fact, however, the resolved phase ambiguities are stochastic as well, even though they are described with a probability mass function instead of a probability density function. In this paper, the integer least-squares technique for integrated ambiguity resolution and parameter estimation is applied to PS-InSAR data analysis, using a three-step procedure. First, a standard least-squares adjustment is performed, assuming the ambiguities are float parameters, leading to the real-valued 'float' solution. Second, the ambiguities are resolved using the float ambiguity estimates. Third, if the second step was successful, the integer estimates are used to correct the float solution estimate. It has been proved that the integer least-squares estimator is an optimal method in the sense that it
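
    The three-step procedure can be sketched as follows (a synthetic toy model; plain rounding stands in for a proper integer search such as LAMBDA, which accounts for correlations among the ambiguities):

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.standard_normal((20, 2))         # real-parameter design
      B = rng.standard_normal((20, 3))         # integer-ambiguity design
      y = A @ np.array([0.5, -1.2]) + B @ np.array([3, -2, 7]) \
          + 0.01 * rng.standard_normal(20)

      # Step 1: float solution -- treat the ambiguities as real-valued.
      x_float, *_ = np.linalg.lstsq(np.hstack([A, B]), y, rcond=None)

      # Step 2: fix the ambiguities to integers (rounding as a stand-in).
      z_fix = np.rint(x_float[2:]).astype(int)

      # Step 3: correct the float solution using the fixed integers.
      a_fix, *_ = np.linalg.lstsq(A, y - B @ z_fix, rcond=None)
      print(z_fix, a_fix)        # recovers [3, -2, 7] and approx. [0.5, -1.2]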

  16. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    The problem of estimating the state vector of a physical system is considered for the case in which the weight matrix in the method of least squares is a function of this vector. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate, which reduces to the solution of a system of algebraic equations, is also proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered, and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
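
    The iterative procedure can be sketched generically: freeze the weight matrix at the current estimate, solve the resulting weighted least-squares problem, and repeat to a fixed point. The per-observation weight model below is invented purely for illustration:

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.standard_normal((50, 3))
      b = A @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(50)

      def weights(x):
          # made-up state-dependent weights, one per observation
          return 1.0 / (1.0 + np.abs(A @ x))

      x = np.zeros(3)
      for _ in range(50):
          sw = np.sqrt(weights(x))             # freeze W at the current x
          x_new, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
          if np.linalg.norm(x_new - x) < 1e-12:
              x = x_new
              break                            # fixed point reached
          x = x_new
      print(x)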

  17. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20 deg.

  18. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  19. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing]

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  20. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, with two main objectives. First, we seek the optimal method to estimate the accelerometer parameters, and second, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the GeoForschungsZentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  1. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

  2. On the Significance of Properly Weighting Sorption Data for Least Squares Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...

  3. UNIPALS: SOFTWARE FOR PRINCIPAL COMPONENTS ANALYSIS AND PARTIAL LEAST SQUARES REGRESSION

    EPA Science Inventory

    Software for the analysis of multivariate chemical data by principal components and partial least squares methods is included on disk. The methods extract latent variables from the chemical data with the UNIversal PArtial Least Squares or UNIPALS algorithm. The software is written ...

  4. First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients

    NASA Technical Reports Server (NTRS)

    Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard

    1996-01-01

    The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
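
    For orientation, a representative scaled L^2 FOSLS system and functional for this class of problems, in a standard textbook form that need not match the paper's exact scaling:

      % scalar elliptic problem -div(a grad p) = f recast as a first-order
      % system with the flux variable u = -a grad p
      \[
        \mathbf{u} + a\,\nabla p = 0, \qquad \nabla\cdot\mathbf{u} = f,
      \]
      % representative scaled L^2 least-squares functional, minimized over (u, p)
      \[
        F(\mathbf{u},p;f) = \bigl\| a^{-1/2}(\mathbf{u} + a\,\nabla p) \bigr\|_{0}^{2}
          + \bigl\| \nabla\cdot\mathbf{u} - f \bigr\|_{0}^{2}.
      \]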

  5. Fast integer least squares estimation methods: applications-oriented review and improvement

    NASA Astrophysics Data System (ADS)

    Xu, Peiliang

    2013-04-01

    The integer least squares (ILS) problem, also known as the weighted closest point problem, is highly interdisciplinary, but no algorithms can find its global optimal integer solution in polynomial time. In this talk, we will review fast algorithms for the estimation of integer parameters. First, we will outline two suboptimal integer solutions, which can be important either in real-time communication systems or for solving high-dimensional GPS integer ambiguity unknowns. We then focus on the most efficient algorithms to search for the exact integer solution, which is shown to be much faster than LAMBDA in the sense that the ratio of integer candidates to be checked by the efficient algorithms to those by LAMBDA can be theoretically expressed by r^m, where r < 1 and m is the number of integer unknowns. Finally, we further improve the searching efficiency of the most powerful combined algorithms by implementing two sorting strategies, which can either be used for finding the exact integer solution or for constructing a suboptimal integer solution. A test example clearly demonstrates that the improved methods can perform significantly better than the most powerful combined algorithm in simultaneously finding the optimal and second optimal integer solutions. More mathematical and algorithmic details of this talk can be found in Xu (2001, J Geod, 75, 408-423); Xu (2006, IEEE Trans Information Theory, 52, 3122-3138); Xu (2012, J Geod, 86, 35-52) and Xu et al. (2012, Survey Review, 44, 59-71).

  6. Comparison and Analysis of Nonlinear Least Squares Methods for Vision Based Navigation (VBN) Algorithms

    NASA Astrophysics Data System (ADS)

    Sheta, B.; Elhabiby, M.; Sheimy, N.

    2012-07-01

    succeeded in converging to the relative optimal solution of the georeferencing parameters. In the trust region methods, the number of iterations was greater than for Levenberg-Marquardt because of the need to evaluate the local minimum at each iteration step to verify whether it is the global one. The Levenberg-Marquardt method can be considered a modified Gauss-Newton algorithm employing the trust region approach, in which a scalar is introduced to assess the choice of the magnitude and the direction of the descent; this scalar determines whether the Gauss-Newton direction or the steepest descent direction will be used, as an adaptive approach for both linear and non-linear mathematical models, and it successfully converged to the relative optimum solution. The results of these five methods are compared explicitly to the traditional linear least-squares approach, with detailed statistical analysis and emphasis on UAV vision-based navigation (VBN) applications.

  7. New prediction-augmented classical least squares (PACLS) methods: Application to unmodeled interferents

    SciTech Connect

    HAALAND,DAVID M.; MELGAARD,DAVID K.

    2000-01-26

    A significant improvement to the classical least squares (CLS) multivariate analysis method has been developed. The new method, called prediction-augmented classical least squares (PACLS), removes the restriction for CLS that all interfering spectral species must be known and their concentrations included during the calibration. The authors demonstrate that PACLS can correct inadequate CLS models if spectral components left out of the calibration can be identified and if their spectral shapes can be derived and added during a PACLS prediction step. The new PACLS method is demonstrated for a system of dilute aqueous solutions containing urea, creatinine, and NaCl analytes with and without temperature variations. The authors demonstrate that if CLS calibrations are performed using only a single analyte's concentration, then there is little, if any, prediction ability. However, if pure-component spectra of analytes left out of the calibration are independently obtained and added during PACLS prediction, then the CLS prediction ability is corrected and predictions become comparable to that of a CLS calibration that contains all analyte concentrations. It is also demonstrated that constant-temperature CLS models can be used to predict variable-temperature data by employing the PACLS method augmented by the spectral shape of a temperature change of the water solvent. In this case, PACLS can also be used to predict sample temperature with a standard error of prediction of 0.07 °C even though the calibration data did not contain temperature variations. The PACLS method is also shown to be capable of modeling system drift to maintain a calibration in the presence of spectrometer drift.
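
    A schematic of the PACLS idea with invented Gaussian "spectra": calibrate CLS on the analyte alone, then augment the estimated pure-component spectrum with the interferent's shape at prediction time only:

      import numpy as np

      rng = np.random.default_rng(3)
      nw = 80
      grid = np.arange(nw)
      k_analyte = np.exp(-0.5 * ((grid - 30) / 5.0) ** 2)   # made-up shapes
      k_interf  = np.exp(-0.5 * ((grid - 55) / 8.0) ** 2)

      # CLS calibration on the analyte only: spectra modeled as D = C @ K.
      C_cal = rng.uniform(0.1, 1.0, (10, 1))
      D_cal = C_cal @ k_analyte[None, :] + 1e-4 * rng.standard_normal((10, nw))
      K_hat, *_ = np.linalg.lstsq(C_cal, D_cal, rcond=None)

      # Unknown sample contains the analyte AND the unmodeled interferent.
      d = 0.6 * k_analyte + 0.3 * k_interf

      c_cls, *_ = np.linalg.lstsq(K_hat.T, d, rcond=None)    # biased estimate
      K_aug = np.vstack([K_hat, k_interf[None, :]])          # PACLS augmentation
      c_pacls, *_ = np.linalg.lstsq(K_aug.T, d, rcond=None)  # close to 0.6
      print(c_cls[0], c_pacls[0])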

  8. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  9. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    SciTech Connect

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  11. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior

  12. TDRSS-user orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), and operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.

  13. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.

  14. Compressible seal flow analysis using the finite element method with Galerkin solution technique

    NASA Technical Reports Server (NTRS)

    Zuk, J.

    1974-01-01

    High pressure gas sealing involves not only balancing the viscous force with the pressure gradient force but also accounting for fluid inertia--especially for choked flow. The conventional finite element method which uses a Rayleigh-Ritz solution technique is not convenient for nonlinear problems. For these problems, a finite element method with a Galerkin solution technique (FEMGST) was formulated. One example, a three-dimensional axisymmetric flow formulation has nonlinearities due to compressibility, area expansion, and convective inertia. Solutions agree with classical results in the limiting cases. The development of the choked flow velocity profile is shown.
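
    A minimal one-dimensional Galerkin finite element sketch (a generic Poisson model problem, not the seal-flow formulation) showing the weighted-residual assembly that a Galerkin solution technique such as FEMGST generalizes to nonlinear problems:

      import numpy as np

      # Solve -u'' = f on (0,1), u(0) = u(1) = 0, with linear elements.
      n = 50
      h = 1.0 / n
      x = np.linspace(0.0, 1.0, n + 1)
      f = lambda t: np.pi ** 2 * np.sin(np.pi * t)   # exact solution sin(pi x)

      K = np.zeros((n + 1, n + 1))                   # global stiffness matrix
      F = np.zeros(n + 1)                            # load vector
      for e in range(n):                             # element-by-element assembly
          K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
          F[e:e + 2] += 0.5 * h * f(0.5 * (x[e] + x[e + 1]))  # midpoint rule

      K[0, :] = K[-1, :] = 0.0                       # impose Dirichlet conditions
      K[0, 0] = K[-1, -1] = 1.0
      F[0] = F[-1] = 0.0
      u = np.linalg.solve(K, F)
      print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error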

  15. Determination of glucose concentration based on pulsed laser induced photoacoustic technique and least square fitting algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2015-08-01

    In this paper, a noninvasive glucose concentration monitoring setup based on the photoacoustic technique was established. In this setup, a 532 nm pumped Q-switched Nd:YAG tunable pulsed laser with a repetition rate of 20 Hz was used as the photoacoustic excitation light source, and an ultrasonic transducer with a central response frequency of 9.55 MHz was used as the detector of the photoacoustic signal of glucose. As a preliminary exploration of blood glucose monitoring, a series of in vitro photoacoustic measurements of glucose aqueous solutions were performed with the established setup. The photoacoustic peak-to-peak values of different concentrations of glucose aqueous solutions, induced by the pulsed laser at output wavelengths from 1300 nm to 2300 nm in intervals of 10 nm, were obtained with 512 averages. The differential spectral and first-order derivative spectral methods were used to obtain the characteristic wavelengths. For the characteristic wavelengths of glucose, the least squares fitting algorithm was used to establish the relationship between the glucose concentrations and the photoacoustic peak-to-peak values. The characteristic wavelengths and the predicted concentrations of the glucose solutions were obtained. Experimental results demonstrated that the prediction at the characteristic wavelengths of 1410 nm and 1510 nm was better than at others, and that this photoacoustic setup and analysis method have potential value for the monitoring of blood glucose concentration.
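
    The final calibration step amounts to an ordinary least-squares line fit between concentration and photoacoustic amplitude; a sketch with invented numbers, not the paper's data:

      import numpy as np

      conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # mg/dL (made up)
      pp   = np.array([1.02, 1.11, 1.19, 1.37, 1.74])    # peak-to-peak (made up)

      slope, intercept = np.polyfit(conc, pp, 1)          # first-order LS fit
      predict = lambda amplitude: (amplitude - intercept) / slope
      print(predict(1.28))                                # inferred concentration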

  16. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to a multicollinearity problem.
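
    The RLS equations referred to here, written as a per-observation update with a diffuse prior (a generic sketch; unit measurement-noise variance is assumed):

      import numpy as np

      def rls_update(x, P, a, y):
          """One RLS step for the scalar observation y = a @ x + noise."""
          Pa = P @ a
          k = Pa / (1.0 + a @ Pa)       # gain
          x = x + k * (y - a @ x)       # innovation correction
          P = P - np.outer(k, Pa)       # covariance downdate
          return x, P

      rng = np.random.default_rng(4)
      x_true = np.array([2.0, -1.0])
      x, P = np.zeros(2), 1e6 * np.eye(2)    # diffuse prior
      for _ in range(200):
          a = rng.standard_normal(2)
          y = a @ x_true + 0.1 * rng.standard_normal()
          x, P = rls_update(x, P, a, y)
      print(x)                               # approaches x_true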

  17. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    SciTech Connect

    Manteuffel, T.A.; Ressel, K.J.; Starke, G.

    1996-12-31

    The least-squares finite element framework for the neutron transport equation, introduced in earlier work, is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

  18. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

    PubMed

    Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

    2016-04-10

    Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method makes the zones with small aberrations insensitive and hinders those zones from being corrected further. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparisons of results show that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method performs better than the least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system. PMID:27139877
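
    In its simplest form, the weighting idea reduces to a row-weighted least-squares solve in which selected rows contribute less; a generic sketch with invented data, not the wavefront-sensor geometry:

      import numpy as np

      rng = np.random.default_rng(5)
      A = rng.standard_normal((100, 4))
      b = A @ np.array([1.0, 0.5, -0.3, 2.0]) + 0.01 * rng.standard_normal(100)
      w = np.ones(100)
      w[:10] = 0.1                      # down-weight ten unreliable rows

      sw = np.sqrt(w)                   # solve min || W^(1/2) (A x - b) ||^2
      x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
      print(x)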

  19. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating the other unknown parameters appearing in each regression equation as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to the identification of mass and thruster properties for a thruster-controlled spacecraft.

  20. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    SciTech Connect

    Jiang, Lijian Li, Xinping

    2015-08-01

    Stochastic multiscale modeling has become a necessary approach for quantifying uncertainty and characterizing multiscale phenomena in many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales present in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity properties and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy in certain

  1. On the accuracy of least squares methods in the presence of corner singularities

    NASA Technical Reports Server (NTRS)

    Cox, C. L.; Fix, G. J.

    1985-01-01

    Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts. Moreover, mesh refinement or the use of special singular elements do not appreciably improve matters. It is shown that if the least squares formulation is done in appropriately weighted space, then optimal convergence results in unweighted spaces like L(2).

  2. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

    SciTech Connect

    Paulsen, Jeffrey L; Franck, John; Demas, Vasiliki; Bouchard, Louis-S.

    2008-03-27

    Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16 100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32 200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable "sensitive volumes."

  3. LP Norm SAR Tomography by Iteratively Reweighted Least Square: First Results on Hong Kong

    NASA Astrophysics Data System (ADS)

    Mancon, Simone; Tebaldini, Stefano; Monti Guarnieri, Andre

    2014-11-01

    Synthetic aperture radar tomography (TomoSAR) is the natural extension to 3-D of conventional 2-D Synthetic Aperture Radar (SAR) imaging. In this work, we focus on urban scenarios where targets of interest are point-like and radiometrically strong, i.e. the reflectivity profile in elevation is sparse. Accordingly, the method for TomoSAR imaging suggested in this work is based on Compressive Sensing (CS) theory. CS problems are typically solved by looking for the minimal solution in some Lp norm, where 0 ≤ p ≤ 1. The solution that minimizes an arbitrary Lp norm can be obtained using the Iteratively Reweighted Least Squares (IRLS) algorithm. Based on an experimental comparison among different choices for p, the conclusion drawn is that the usual choice p = 1 is the best trade-off between resolution and robustness to noise. Results from real data are discussed by reporting a TomoSAR reconstruction of an area in Hong Kong (China), acquired by COSMO-SkyMed.
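
    A compact IRLS sketch for this sparse-recovery setting: minimize the Lp norm subject to the data constraint by repeatedly solving weighted minimum-norm problems (a generic compressive-sensing toy with p = 1, not the TomoSAR processing chain):

      import numpy as np

      rng = np.random.default_rng(6)
      A = rng.standard_normal((20, 60))              # underdetermined system
      x_true = np.zeros(60)
      x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]         # sparse profile
      y = A @ x_true

      p, eps = 1.0, 1.0
      x = A.T @ np.linalg.solve(A @ A.T, y)          # minimum-L2-norm start
      for _ in range(50):
          w = (x ** 2 + eps) ** (p / 2.0 - 1.0)      # IRLS weights
          Q = A / w[None, :]                         # A @ diag(1/w)
          x = (1.0 / w) * (A.T @ np.linalg.solve(Q @ A.T, y))
          eps = max(0.5 * eps, 1e-12)                # anneal the smoothing
      print(np.round(x[[5, 17, 42]], 3))             # typically recovers the support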

  4. A Comparative Study of Different Reconstruction Schemes for a Reconstructed Discontinuous Galerkin Method on Arbitrary Grids

    SciTech Connect

    Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai

    2011-06-01

    A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed RDG(P1P2), is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.

  5. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models subjected to a comparative study through the handling of UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components. PMID:26987554

  6. A two-dimensional Riemann solver with self-similar sub-structure - Alternative formulation based on least squares projection

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Vides, Jeaniffer; Gurski, Katharine; Nkonga, Boniface; Dumbser, Michael; Garain, Sudip; Audit, Edouard

    2016-01-01

    Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The self-similar formulation of Balsara [16] proves especially useful for this purpose. While that work is based on a Galerkin projection, in this paper we present an analogous self-similar formulation that is based on a different interpretation. In the present formulation, we interpret the shock jumps at the boundary of the strongly-interacting state quite literally. The enforcement of the shock jump conditions is done with a least squares projection (Vides, Nkonga and Audit [67]). With that interpretation, we again show that the multidimensional Riemann solver can be endowed with sub-structure. However, we find that the most efficient implementation arises when we use a flux vector splitting and a least squares projection. An alternative formulation that is based on the full characteristic matrices is also presented. The multidimensional Riemann solvers that are demonstrated here use one-dimensional HLLC Riemann solvers as building blocks. Several stringent test problems drawn from hydrodynamics and MHD are presented to show that the method works. Results from structured and unstructured meshes demonstrate the versatility of our method. The reader is also invited to watch a video introduction to multidimensional Riemann solvers on http://www.nd.edu/~dbalsara/Numerical-PDE-Course.

  7. On non-combinatorial weighted total least squares with inequality constraints

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2014-08-01

    Observation systems known as errors-in-variables (EIV) models with model parameters estimated by total least squares (TLS) have been discussed for more than a century, though the terms EIV and TLS were coined much more recently. So far, it has only been shown that the inequality-constrained TLS (ICTLS) solution can be obtained by the combinatorial methods, assuming that the weight matrices of observations involved in the data vector and the data matrix are identity matrices. Although the previous works test all combinations of active sets or solution schemes in a clear way, some aspects have received little or no attention such as admissible weights, solution characteristics and numerical efficiency. Therefore, the aim of this study was to adjust the EIV model, subject to linear inequality constraints. In particular, (1) This work deals with a symmetrical positive-definite cofactor matrix that could otherwise be quite arbitrary. It also considers cross-correlations between cofactor matrices for the random coefficient matrix and the random observation vector. (2) From a theoretical perspective, we present first-order Karush-Kuhn-Tucker (KKT) necessary conditions and the second-order sufficient conditions of the inequality-constrained weighted TLS (ICWTLS) solution by analytical formulation. (3) From a numerical perspective, an active set method without combinatorial tests as well as a method based on sequential quadratic programming (SQP) is established. By way of applications, computational costs of the proposed algorithms are shown to be significantly lower than the currently existing ICTLS methods. It is also shown that the proposed methods can treat the ICWTLS problem in the case of more general weight matrices. Finally, we study the ICWTLS solution in terms of non-convex weighted TLS contours from a geometrical perspective.

  8. A new formulation for total least square error method in d-dimensional space with mapping to a parametric line

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many practical applications based on the Least Square Error (LSE) or Total Least Square Error (TLSE) methods. Usually the standard least square error is used due to its simplicity, but it is not an optimal solution, as it does not optimize distance, but the square of a distance. The TLSE method, respecting the orthogonality of the distance measurement, is computed in d-dimensional space: for points given in E2 a line π in E2, respectively for points given in E3 a plane ρ in E3, fitting the TLSE criterion is found. However, some tasks in the physical sciences lead to a slightly different problem. In this paper, a new TLSE method is introduced for the case when data are given in E3 and a line π ∈ E3 fitting the TLSE criterion is to be found. The presented approach is applicable in the general d-dimensional case, i.e. when points are given in Ed and a line π ∈ Ed is to be found. This formulation is different from the standard TLSE formulation.
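
    For the line case, the orthogonal-distance (total least square error) fit has a closed form via the singular value decomposition: the optimal parametric line passes through the centroid of the points, with the leading right singular vector of the centered data as its direction. A sketch in E3 with synthetic points:

      import numpy as np

      rng = np.random.default_rng(7)
      t = rng.uniform(-1.0, 1.0, 100)
      d_true = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)
      pts = np.array([0.5, -0.2, 1.0]) + t[:, None] * d_true
      pts += 0.01 * rng.standard_normal(pts.shape)    # measurement noise

      c = pts.mean(axis=0)                  # centroid lies on the TLSE line
      _, _, Vt = np.linalg.svd(pts - c)     # SVD of the centered points
      d = Vt[0]                             # line direction (up to sign)
      print(c, d)                           # d is close to +/- d_true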

  9. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    NASA Astrophysics Data System (ADS)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.

  10. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time and using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory, and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
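
    The single-pass device described here can be sketched generically: accumulate X'X and X'y row by row so that X itself is never stored, then solve the normal equations (illustrative code, not the NDMMF software):

      import numpy as np

      def stream_ols(rows, p):
          """OLS from a stream of (x_row, y) pairs without storing X."""
          XtX = np.zeros((p, p))
          Xty = np.zeros(p)
          for x_row, y_i in rows:                 # one observation at a time
              XtX += np.outer(x_row, x_row)
              Xty += y_i * x_row
          return np.linalg.solve(XtX, Xty)        # best-fit parameters

      rng = np.random.default_rng(8)
      beta = np.array([1.0, -2.0, 0.5])
      obs = ((x, x @ beta + 0.01 * rng.standard_normal())
             for x in rng.standard_normal((10000, 3)))
      print(stream_ols(obs, 3))                   # close to beta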

  11. Methods for Least Squares Data Smoothing by Adjustment of Divided Differences

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2008-09-01

    A brief survey is presented of the main methods that are used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of the smoothing approach is that it provides automatically a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although it is a combinatorial problem, whose solution can require about O(n^k) quadratic programming calculations in n variables and n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m > 2 for small values of k. Attention is paid to the purpose of each method rather than to its details. Some software packages make the methods publicly accessible through library systems.

  12. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient. PMID:25531948
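
    The core reweighting step is easy to illustrate on the sparse-only case with squared loss, which is the setting the paper generalizes from. The following is a hedged numpy sketch of standard smoothed IRLS, not the authors' joint low-rank-and-sparse solver:

    import numpy as np

    def irls_sparse(A, b, lam=0.1, p=1.0, eps=1e-6, iters=50):
        """Sketch of IRLS for min ||Ax - b||_2^2 + lam * ||x||_p^p.
        Each iteration solves a weighted ridge problem with the standard
        smoothed weights w_i = (x_i^2 + eps)^(p/2 - 1)."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            w = (x**2 + eps) ** (p / 2.0 - 1.0)
            # Solve (A'A + lam * diag(w)) x = A'b
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
        return x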

  13. First-order system least-squares for the Helmholtz equation

    SciTech Connect

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k^2 p = 0. Several least-squares functionals, some of which include both H^-1(Ω) and L^2(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H^1(Ω), each of these functionals is equivalent, independently of k, to a scaled H^1(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components ce^{ik(αx+βy)}, where c is a complex vector and α^2 + β^2 = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate ce^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of the coarser levels so that several coarse-grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution of Maxwell's equations will also be presented.

  14. Analysis of crustal deformation and strain characteristics in the Tianshan Mountains with least-squares collocation

    NASA Astrophysics Data System (ADS)

    Li, S. P.; Chen, G.; Li, J. W.

    2015-11-01

    By fitting the observed velocity field of the Tianshan Mountains from 1992 to 2006 with least-squares collocation, we established a velocity field model for this region. The velocity field model reflects the crustal deformation characteristics of the Tianshan reasonably well. From the Tarim Basin to the Junggar Basin and the Kazakh platform, crustal deformation decreases gradually. Divided at 82° E, the convergence rates in the west are clearly higher than those in the east. We also calculated crustal strain parameters for the Tianshan Mountains. The results for maximum shear strain exhibit a concentration of significantly high values at Wuqia and the regions to its west, where the values reach a maximum of 4.4×10^-8 a^-1. From the isogram distribution of the surface expansion rate, we found evidence that the Tianshan Mountains have been subjected to strong lateral extrusion by the basins on both sides. Combining this analysis with existing focal mechanism solutions from 1976 to 2014, we conclude that earthquake events tend to concentrate in regions where maximum shear strains accumulate or change abruptly. For the Tianshan Mountains, the possibility of strong earthquakes in Wuqia-Jiashi and around Lake Issyk-Kul will persist over the long term.

  15. Reconstruction of vibroacoustic responses of a highly nonspherical structure using Helmholtz equation least-squares method.

    PubMed

    Lu, Huancai; Wu, Sean F

    2009-03-01

    The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to vibroacoustic responses of a baffled plate are readily available so the accuracy of reconstruction can be checked accurately. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters such as number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of source surface of reconstruction on the resultant accuracy of reconstruction are examined. PMID:19275312

  16. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used for the estimation of an ROC curve from correlated data, how to develop least squares methods to estimate the ROC curve from clustered data has not been studied. Also, the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuity of the true underlying curve. The least squares methods are shown to be more efficient than existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. PMID:26848938

  17. Algorithms and architectures for adaptive least squares signal processing, with applications in magnetoencephalography

    SciTech Connect

    Lewis, P.S.

    1988-10-01

    Least squares techniques are widely used in adaptive signal processing. While algorithms based on least squares are robust and offer rapid convergence properties, they also tend to be complex and computationally intensive. To enable the use of least squares techniques in real-time applications, it is necessary to develop adaptive algorithms that are efficient and numerically stable, and can be readily implemented in hardware. The first part of this work presents a uniform development of general recursive least squares (RLS) algorithms, and multichannel least squares lattice (LSL) algorithms. RLS algorithms are developed for both direct estimators, in which a desired signal is present, and for mixed estimators, in which no desired signal is available, but the signal-to-data cross-correlation is known. In the second part of this work, new and more flexible techniques of mapping algorithms to array architectures are presented. These techniques, based on the synthesis and manipulation of locally recursive algorithms (LRAs), have evolved from existing data dependence graph-based approaches, but offer the increased flexibility needed to deal with the structural complexities of the RLS and LSL algorithms. Using these techniques, various array architectures are developed for each of the RLS and LSL algorithms and the associated space/time tradeoffs presented. In the final part of this work, the application of these algorithms is demonstrated by their employment in the enhancement of single-trial auditory evoked responses in magnetoencephalography. 118 refs., 49 figs., 36 tabs.
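
    For orientation, the textbook exponentially weighted RLS recursion for the direct-estimator case (a desired signal d is available) looks as follows; this is a generic sketch and says nothing about the LRA-based array mappings developed in the work itself.

    import numpy as np

    class RecursiveLeastSquares:
        """Exponentially weighted RLS direct estimator: adapt weights w so
        that w'u tracks the desired signal d."""
        def __init__(self, n, lam=0.99, delta=100.0):
            self.w = np.zeros(n)            # filter weights
            self.P = delta * np.eye(n)      # inverse correlation estimate
            self.lam = lam                  # forgetting factor

        def update(self, u, d):
            Pu = self.P @ u
            k = Pu / (self.lam + u @ Pu)    # gain vector
            e = d - self.w @ u              # a priori error
            self.w += k * e
            self.P = (self.P - np.outer(k, Pu)) / self.lam
            return e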

  18. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    The least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) with the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving high-dimensional, large-scale problems. A U.S. commercial credit card database is used to test the efficiency of our method, and the results prove satisfactory.

  19. Least-squares reverse-time migration of Cranfield VSP data for monitoring CO2 injection

    NASA Astrophysics Data System (ADS)

    TAN, S.; Huang, L.

    2012-12-01

    Cost-effective monitoring for carbon utilization and sequestration requires high-resolution imaging with a minimal amount of data. Least-squares reverse-time migration is a promising imaging method for this purpose. We apply least-squares reverse-time migration to a portion of the 3D vertical seismic profile data acquired at the Cranfield enhanced oil recovery field in Mississippi for monitoring CO2 injection. Conventional reverse-time migration of limited data suffers from significant image artifacts and poor image resolution. Least-squares reverse-time migration can reduce image artifacts and improve the image resolution. We demonstrate the significant improvements of least-squares reverse-time migration by comparing its migration images of the Cranfield VSP data with those obtained using conventional reverse-time migration.

  20. [Partial least squares regression variable screening studies on apple soluble solids NIR spectral detection].

    PubMed

    Ouyang, Ai-Guo; Xie, Xiao-Qiang; Zhou, Yan-Rui; Liu, Yan-De

    2012-10-01

    To improve the predictive ability and robustness of the NIR correction model for the soluble solids content (SSC) of apple, the reverse interval partial least squares method, a genetic algorithm, and the continuous projection method were implemented to select variables from the NIR spectra, and partial least squares regression models were established. The correction model built from the 141 variables screened by the genetic algorithm gave the best predictions: compared with the full-spectrum correction model, the correlation coefficient increased from 0.93 to 0.96 and the root mean square error of prediction decreased from 0.30 degrees Brix to 0.23 degrees Brix. These experimental results show that the genetic algorithm combined with partial least squares regression improves the detection precision of the NIR model for the SSC of apple. PMID:23285864

  1. SENSOP: A Derivative-Free Solver for Nonlinear Least Squares with Sensitivity Scaling

    PubMed Central

    Chan, I.S.; Goldstein, A.A.; Bassingthwaighte, J.B.

    2010-01-01

    Nonlinear least squares optimization is used most often in fitting a complex model to a set of data. An ordinary nonlinear least squares optimizer assumes a constant variance for all the data points. This paper presents SENSOP, a weighted nonlinear least squares optimizer, which is designed for fitting a model to a set of data where the variance may or may not be constant. It uses a variant of the Levenberg–Marquardt method to calculate the direction and the length of the step change in the parameter vector. The method for estimating appropriate weighting functions applies generally to 1-dimensional signals and can be used for higher dimensional signals. Sets of multiple tracer outflow dilution curves present special problems because the data encompass three to four orders of magnitude; a fractional power function provides appropriate weighting giving success in parameter estimation despite the wide range. PMID:8116914
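
    The overall idea, residuals scaled by signal-dependent weights and fed to a Levenberg-Marquardt style solver, can be sketched with scipy; the model, weighting exponent, and data below are invented for illustration, and this is not the SENSOP code.

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, t):
        a, k = params
        return a * np.exp(-k * t)

    t = np.linspace(0.0, 5.0, 40)
    y = model([2.0, 0.7], t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
    # Fractional-power weighting, useful when the data span several
    # orders of magnitude (an illustrative choice of exponent).
    w = 1.0 / np.maximum(np.abs(y), 1e-3) ** 0.5

    res = least_squares(lambda p: w * (model(p, t) - y), x0=[1.0, 1.0], method="lm")
    print(res.x)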

  2. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hurter, Fabian; Geiger, Alain; Rohm, Witold; Bosy, Jarosław

    2016-08-01

    Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real time measurements. In this study, we present the total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland and the total refractivity profiles as well as ZTDs over Poland using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set with the best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing the horizontal gradients does not improve the vertical interpolation, and results in slightly larger biases and standard deviations. For Poland, the data set based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model and from a combination of the NWP model and GNSS ZTDs shows the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS observations and GNSS-only data set exhibit the best accuracy with an average bias (from all stations) of 3.7 mm and average standard deviations of 17.0 mm w.r.t. the reference GNSS stations.

  3. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

    PubMed Central

    Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    The traditional confidence interval associated with the ordinary least squares estimator of linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator via utilizing well-defined inversion based kernel smoothing techniques in order to estimate the conditional probability density distribution of the dependent random variable. Simulation results show that given a small sample size, our method significantly increases the power as compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882

  4. A new algorithm for constrained nonlinear least-squares problems, part 1

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, F. T.

    1983-01-01

    A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.

  5. Least square neural network model of the crude oil blending process.

    PubMed

    Rubio, José de Jesús

    2016-06-01

    In this paper, the recursive least squares algorithm is designed for big-data learning with a feedforward neural network. The proposed method, combining recursive least squares with a feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has the learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. PMID:26992706

  6. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least-squares method and the nonlinear neural network predicted identical results.

  7. Least square based method for obtaining one-particle spectral functions from temperature Green functions

    NASA Astrophysics Data System (ADS)

    Liu, Jun

    2013-02-01

    A least-squares-based fitting scheme is proposed to extract an optimal one-particle spectral function from any one-particle temperature Green function. It uses the existing non-negative least squares (NNLS) fit algorithm to do the fit, and Tikhonov regularization to handle possible numerical singular behavior. By flexibly adding delta peaks to represent very sharp features of the target spectrum, this scheme guarantees a global minimization of the fitted residue. The performance of this scheme is demonstrated on diverse physical examples. The proposed scheme is shown to be comparable in performance to the standard Padé analytic continuation scheme.
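
    A minimal sketch of the NNLS-plus-Tikhonov ingredient (the delta-peak insertion and the kernel construction are omitted): regularization is obtained by augmenting the system before calling the standard NNLS routine.

    import numpy as np
    from scipy.optimize import nnls

    def nnls_tikhonov(K, g, alpha=1e-3):
        """Solve min ||K f - g||^2 + alpha ||f||^2 subject to f >= 0 by
        augmenting the system with sqrt(alpha)*I and zeros; a sketch of
        the regularization device, not the full fitting scheme above."""
        n = K.shape[1]
        K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(n)])
        g_aug = np.concatenate([g, np.zeros(n)])
        f, _ = nnls(K_aug, g_aug)
        return f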

  8. Analysis of total least squares in estimating the parameters of a mortar trajectory

    SciTech Connect

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results, about 10%, over the LS method.
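
    For reference, the classical TLS solution of an overdetermined system A x ~ b is obtained from the SVD of the augmented matrix [A | b]; a generic numpy sketch, not the trajectory estimator of the report:

    import numpy as np

    def tls(A, b):
        """Classical total least-squares solution of A x ~ b via the SVD
        of [A | b]; assumes the generic case where the last component of
        the smallest right singular vector is nonzero."""
        n = A.shape[1]
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        v = Vt[-1]               # right singular vector of smallest singular value
        return -v[:n] / v[n]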

  9. MLAMBDA: a modified LAMBDA method for integer least-squares estimation

    NASA Astrophysics Data System (ADS)

    Chang, X.-W.; Yang, X.; Zhou, T.

    2005-12-01

    The Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method has been widely used in GNSS for fixing integer ambiguities. It can also solve any integer least squares (ILS) problem arising from other applications. For real-time applications with high dimensions, computational speed is crucial. A modified LAMBDA (MLAMBDA) method is presented. Several strategies are proposed to reduce the computational complexity of the LAMBDA method. Numerical simulations show that MLAMBDA is (much) faster than LAMBDA. The relations between the LAMBDA method and some relevant methods in the information theory literature are pointed out when we introduce its main procedures.
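
    A full search as in LAMBDA/MLAMBDA is beyond a short example, but the cheap suboptimal step that such methods refine, Babai's sequential rounding on the triangular factor, can be sketched as follows (a generic ILS illustration, not the MLAMBDA algorithm):

    import numpy as np

    def babai_nearest_plane(A, y):
        """Approximate solution of min_z ||y - A z||^2 over integer z by
        back-substitution with rounding on the R factor of A = QR; fast
        but suboptimal, which is why LAMBDA-type methods add decorrelation
        and a proper search on top."""
        Q, R = np.linalg.qr(A)
        yhat = Q.T @ y
        n = A.shape[1]
        z = np.zeros(n, dtype=int)
        for k in range(n - 1, -1, -1):
            z[k] = round((yhat[k] - R[k, k + 1:] @ z[k + 1:]) / R[k, k])
        return z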

  10. Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1992-01-01

    TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.

  11. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  12. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for many real image processing tasks, and recovery must then proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because the PSF and noise energy differ from case to case, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with the FOA fitness function given by the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up the restoration rate.

  13. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

    NASA Technical Reports Server (NTRS)

    Crowley, J. K.; Clark, R. N.

    1992-01-01

    Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990 and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are compared over selected wavelength intervals to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals, as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
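
    A much-simplified sketch of one band-fitting step (gain and offset of a reference spectrum fitted by least squares over a wavelength window, then "fit", "band depth", and their product reported) is given below; the depth and continuum conventions here are illustrative assumptions, not the exact Clark et al. procedure.

    import numpy as np

    def band_fit(observed, reference):
        """Fit gain and offset of a library reference spectrum to an
        observed spectrum over one wavelength window, then score the fit
        and a gain-scaled band depth (crude continuum from the endpoints)."""
        G = np.column_stack([reference, np.ones_like(reference)])
        (gain, offset), *_ = np.linalg.lstsq(G, observed, rcond=None)
        modeled = gain * reference + offset
        fit = np.corrcoef(observed, modeled)[0, 1]      # degree of fit
        continuum = max(reference[0], reference[-1])
        depth = gain * (1.0 - reference.min() / continuum)
        return fit, depth, fit * depth                  # fit times band depth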

  14. Fast integer least-squares estimation for GNSS high-dimensional ambiguity resolution using lattice theory

    NASA Astrophysics Data System (ADS)

    Jazaeri, S.; Amiri-Simkooei, A. R.; Sharifi, M. A.

    2012-02-01

    GNSS ambiguity resolution is the key issue in the high-precision relative geodetic positioning and navigation applications. It is a problem of integer programming plus integer quality evaluation. Different integer search estimation methods have been proposed for the integer solution of ambiguity resolution. Slow rate of convergence is the main obstacle to the existing methods where tens of ambiguities are involved. Herein, integer search estimation for the GNSS ambiguity resolution based on the lattice theory is proposed. It is mathematically shown that the closest lattice point problem is the same as the integer least-squares (ILS) estimation problem and that the lattice reduction speeds up searching process. We have implemented three integer search strategies: Agrell, Eriksson, Vardy, Zeger (AEVZ), modification of Schnorr-Euchner enumeration (M-SE) and modification of Viterbo-Boutros enumeration (M-VB). The methods have been numerically implemented in several simulated examples under different scenarios and over 100 independent runs. The decorrelation process (or unimodular transformations) has been first used to transform the original ILS problem to a new one in all simulations. We have then applied different search algorithms to the transformed ILS problem. The numerical simulations have shown that AEVZ, M-SE, and M-VB are about 320, 120 and 50 times faster than LAMBDA, respectively, for a search space of dimension 40. This number could change to about 350, 160 and 60 for dimension 45. The AEVZ is shown to be faster than MLAMBDA by a factor of 5. Similar conclusions could be made using the application of the proposed algorithms to the real GPS data.

  15. Least squares estimation of Generalized Space Time AutoRegressive (GSTAR) model and its properties

    NASA Astrophysics Data System (ADS)

    Ruchjana, Budi Nurani; Borovkova, Svetlana A.; Lopuhaa, H. P.

    2012-05-01

    In this paper we study least-squares estimation of the parameters of the Generalized Space Time AutoRegressive (GSTAR) model and its properties, in particular consistency and asymptotic normality. We use the R software to estimate the GSTAR parameters and apply the model to real data, such as oil production data from a volcanic layer.

  16. ON ASYMPTOTIC DISTRIBUTION AND ASYMPTOTIC EFFICIENCY OF LEAST SQUARES ESTIMATORS OF SPATIAL VARIOGRAM PARAMETERS. (R827257)

    EPA Science Inventory

    Abstract

    In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...

  17. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

    ERIC Educational Resources Information Center

    Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2011-01-01

    The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…

  18. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  19. Using Technology to Optimize and Generalize: The Least-Squares Line

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…
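
    The algebra alluded to can be reconstructed as follows (a hedged sketch of the standard derivation, not necessarily the article's exact presentation). For a trial line y = mx + c, the error sum of squares is quadratic in c for fixed m, and the vertex formula gives c* = ȳ - m x̄, so the optimal line passes through the centroid; substituting back leaves a quadratic in m:

    S(m) = \sum_{i=1}^{n}\bigl[(y_i-\bar y) - m(x_i-\bar x)\bigr]^2 = Am^2 + Bm + C,
    \qquad A = \sum_i (x_i-\bar x)^2,\quad
    B = -2\sum_i (x_i-\bar x)(y_i-\bar y),\quad
    C = \sum_i (y_i-\bar y)^2,

    m^* = -\frac{B}{2A} = \frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sum_i (x_i-\bar x)^2}.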

  20. Using R^2 to compare least-squares fit models: When it must fail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    R^2 can be used correctly to select from among competing least-squares fit models when the data are fitted in common form and with common weighting. However, then R^2 comparisons become equivalent to comparisons of the estimated fit variance s^2 in unweighted fitting, or of the reduced chi-square in...

  1. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…

  2. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    NASA Astrophysics Data System (ADS)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
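
    The iterative conjugate gradient scheme referred to above is, in generic form, CGLS on a linear operator pair (demigration as the forward operator, migration as its adjoint). A hedged sketch; in the paper's scheme the structure-oriented LUM filter would additionally be applied to the model at each iteration.

    import numpy as np

    def cgls(forward, adjoint, data, x0, niter=20):
        """Conjugate-gradient least squares for min ||forward(x) - data||^2,
        where `forward`/`adjoint` are matching linear operators (here,
        demigration/migration); a generic sketch, not the paper's code."""
        x = x0.copy()
        r = data - forward(x)
        s = adjoint(r)
        p = s.copy()
        gamma = np.vdot(s, s)
        for _ in range(niter):
            q = forward(p)
            alpha = gamma / np.vdot(q, q)
            x += alpha * p
            r -= alpha * q
            s = adjoint(r)
            gamma_new = np.vdot(s, s)
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x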

  3. SAS MACRO LANGUAGE PROGRAM FOR PARTIAL LEAST SQUARES REGRESSION OF SPECTRAL DATA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A computer program was written in the SAS language for the purpose of examining the effect of spectral pretreatments on partial least squares regression of near-infrared (or similarly structured) data. The program operates in an unattended batch mode, in which the user may specify a number of commo...

  4. A Comparison of the Method of Least Squares and the Method of Averages. Classroom Notes

    ERIC Educational Resources Information Center

    Glaister, P.

    2004-01-01

    Two techniques for determining a straight line fit to data are compared. This article reviews two simple techniques for fitting a straight line to a set of data, namely the method of averages and the method of least squares. These methods are compared by showing the results of a simple analysis, together with a number of tests based on randomized…

  5. A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.

    2011-01-01

    Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…

  6. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    ERIC Educational Resources Information Center

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  7. Characterization of Titan 3-D acoustic pressure spectra by least-squares fit to theoretical model

    NASA Astrophysics Data System (ADS)

    Hartnett, E. B.; Carleen, E.

    1980-01-01

    A theoretical model for the acoustic spectra of undeflected rocket plumes is fitted to computed spectra of a Titan III-D at varying times after ignition, by a least-squares method. Tests for the goodness of the fit are made.

  8. Finite volume - space-time discontinuous Galerkin method for the solution of compressible turbulent flow

    NASA Astrophysics Data System (ADS)

    Česenek, Jan

    2016-03-01

    In this article we deal with the numerical simulation of non-stationary compressible turbulent flow. Compressible turbulent flow is described by the Reynolds-Averaged Navier-Stokes (RANS) equations, and the RANS system is closed with a two-equation k-omega turbulence model. These two systems of equations are solved separately. Discretization of the RANS system is carried out by the space-time discontinuous Galerkin method, which is based on piecewise polynomial discontinuous approximation of the sought solution in space and in time. Discretization of the two-equation k-omega turbulence model is carried out by the implicit finite volume method, which is based on piecewise constant approximation of the sought solution. We present some numerical experiments to demonstrate the applicability of the method using our in-house code.

  9. Deterministic solution of the Boltzmann equation using a discontinuous Galerkin velocity discretization

    NASA Astrophysics Data System (ADS)

    Alekseenko, A.; Josyula, E.

    2012-11-01

    We propose an approach for high-order discretization of the Boltzmann equation in the velocity space using discontinuous Galerkin methods. Our approach employs a reformulation of the collision integral in the form of a bilinear operator with a time-independent kernel. In the fully non-linear case the complexity of the method is O(n^8) operations per spatial cell, where n is the number of degrees of freedom in one velocity direction. The new method is suitable for parallelization to a large number of processors. Techniques of automatic perturbation decomposition and linearisation are developed to achieve additional performance improvement. The number of operations per spatial cell in the linearised regime is O(n^6). The method is applied to the solution of the spatially homogeneous relaxation problem. Mass, momentum, and energy are conserved to good precision in the computed solutions.

  10. Nonlinear Least Squares (Levenberg-Marquardt algorithms) for geodetic adjustment and coordinates transformation.

    NASA Astrophysics Data System (ADS)

    Kheloufi, N.; Kahlouche, S.; Lamara, R. Ait Ahmed

    2009-04-01

    The resolution of MREs (Multiple Regression Equations) is an important tool for fitting different geodetic networks. Nevertheless, in various fields of engineering and earth science, certain cases need more accuracy, and ordinary (linear) least squares proves to be limited. Thus, we have to use numerical methods of resolution that can provide greater efficiency in polynomial modeling. In geodesy the accuracy of coordinate determination and network adjustment is very important, which is why, instead of being limited to linear models, we have to apply nonlinear least-squares resolution to the transformation problem between geodetic systems. This need appears especially in the case of the Nord-Sahara datum (Algeria), for which linear models are not well suited because of the lack of information about the geoid's undulation. In this paper, our main aim is to demonstrate the importance of using nonlinear least squares to improve the quality of geodetic adjustment and coordinate transformation, and to assess the extent of its use. The algorithms concern the application of two models: a three-dimensional one (global transformation) and a two-dimensional one (local transformation) over a large area (Algeria). We compute coordinate transformation parameters and their RMS values by both ordinary least squares and the new algorithms, then perform a statistical analysis in order to compare the linear adjustment, with its two variants (local and global), against the nonlinear one. In this context, a set of 16 benchmarks has been integrated to compute the transformation parameters (3D and 2D). Different nonlinear optimization algorithms (Newton, Steepest Descent, and Levenberg-Marquardt) have been implemented to solve the transformation problem. Conclusions and recommendations are given with respect to the suitability, accuracy and efficiency of each method. Key words: MREs, Nord Sahara, global

  11. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve the robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite; the strategy uses shifting in some optimal way, and the method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA; this was made publicly available and was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS (version 3); as part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  12. Weighted linear least squares problem: an interval analysis approach to rank determination

    SciTech Connect

    Manteuffel, T. A.

    1980-08-01

    This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ~ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^-1 can be used to bound the rank of A^I. A modification of the Modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.

  13. Partial least squares regression on DCT domain for infrared face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-09-01

    Compact and discriminative feature extraction is a challenging task for infrared face recognition. In this paper, we propose an infrared face recognition method using Partial Least Squares (PLS) regression on Discrete Cosine Transform (DCT) coefficients. Owing to its strong data decorrelation and energy compaction properties, the DCT is used to extract compact features from infrared faces. To extract discriminative information from the DCT coefficients, a class-specific one-vs-rest PLS classifier is learned for accurate classification. The infrared data were collected with a ThermoVision A40 infrared camera supplied by FLIR Systems Inc. The experimental results show that the recognition rate of the proposed algorithm reaches 95.8%, outperforming state-of-the-art infrared face recognition methods based on Linear Discriminant Analysis (LDA) and the DCT.

  14. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.

  15. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  16. Quantitative Analysis of Isotope Distributions In Proteomic Mass Spectrometry Using Least-Squares Fourier Transform Convolution

    PubMed Central

    Sperling, Edit; Bunner, Anne E.; Sykes, Michael T.; Williamson, James R.

    2008-01-01

    Quantitative proteomic mass spectrometry involves comparison of the amplitudes of peaks resulting from different isotope labeling patterns, including fractional atomic labeling and fractional residue labeling. We have developed a general and flexible analytical treatment of the complex isotope distributions that arise in these experiments, using Fourier transform convolution to calculate labeled isotope distributions and least squares for quantitative comparison with experimental peaks. The degree of fractional atomic and fractional residue labeling can be determined from experimental peaks at the same time as the integrated intensity of all of the isotopomers in the isotope distribution. The approach is illustrated using data with fractional 15N-labeling and fractional 13C-isoleucine labeling. The least-squares Fourier transform convolution approach can be applied to many types of quantitative proteomic data, including data from stable isotope labeling by amino acids in cell culture and pulse labeling experiments. PMID:18522437
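
    The convolution-plus-least-squares idea can be sketched in a few lines (numpy, with invented numbers; the paper performs the convolution in the Fourier domain and fits all parameters jointly):

    import numpy as np

    def atom_distribution(frac_label):
        """Two-isotope atomic distribution, e.g. [14N, 15N] abundances."""
        return np.array([1.0 - frac_label, frac_label])

    def molecule_distribution(n_atoms, frac_label):
        """Isotope distribution of a molecule with n_atoms labeled positions,
        by repeated convolution of the per-atom distributions."""
        dist = np.array([1.0])
        for _ in range(n_atoms):
            dist = np.convolve(dist, atom_distribution(frac_label))
        return dist

    def fit_amplitude(template, observed):
        """One-parameter least-squares fit of an amplitude to the peaks."""
        a = (template @ observed) / (template @ template)
        r = observed - a * template
        return a, r @ r

    # Scan the label fraction; the minimizing (fraction, amplitude) pair
    # recovers both the labeling degree and the integrated intensity.
    observed = 3.7 * molecule_distribution(10, 0.4)   # synthetic peak cluster
    fracs = np.linspace(0.0, 1.0, 101)
    errors = [fit_amplitude(molecule_distribution(10, f), observed)[1] for f in fracs]
    print(fracs[int(np.argmin(errors))])              # ~0.4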

  17. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    NASA Astrophysics Data System (ADS)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is constructed from the training symbols. Next, the least squares principle is applied to build the objective function, which minimizes the error between the constructed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  18. A note on the total least squares problem for coplanar points

    SciTech Connect

    Lee, S.L.

    1994-09-01

    The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, ..., n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error to coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
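
    Pearson's closed-form fit can be reproduced numerically from the 2x2 covariance matrix of the centered data, whose principal eigenvector gives the direction minimizing the perpendicular distances; a small sketch (assumes a non-vertical fitted line):

    import numpy as np

    def tls_line(x, y):
        """Orthogonal-distance (TLS) line fit through the centroid; the
        slope comes from the principal eigenvector of the 2x2 scatter
        matrix, matching Pearson's closed-form result."""
        X = np.column_stack([x - x.mean(), y - y.mean()])
        _, vecs = np.linalg.eigh(X.T @ X)
        direction = vecs[:, -1]            # largest-eigenvalue direction
        slope = direction[1] / direction[0]
        intercept = y.mean() - slope * x.mean()
        return slope, intercept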

  19. Evaluation of fatty proportion in fatty liver using least squares method with constraints.

    PubMed

    Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay

    2014-01-01

    Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from a ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since the attenuation value of fat is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method yields accurate attenuation estimates. It is shown that the attenuation values have a positive correlation with the fatty proportion, which can be used to assess fatty liver syndrome. PMID:25226986

  20. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    SciTech Connect

    Verdoolaege, Geert

    2015-01-13

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  1. Least squares algorithm for region-of-interest evaluation in emission tomography

    SciTech Connect

    Formiconi, A.R. (Dipt. di Fisiopatologia Clinica)

    1993-03-01

    The performance of the least squares algorithm applied to region-of-interest evaluation was assessed in a simulation study. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction: filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased to within roundoff error. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, the bias of the estimates decreased with an increasing number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.

  2. Mobile Location Using Improved Covariance Shaping Least-Squares Estimation in Cellular Systems

    NASA Astrophysics Data System (ADS)

    Chang, Ann-Chen; Lee, Yu-Hong

    This Letter deals with the problem of non-line-of-sight (NLOS) propagation in cellular systems devoted to location purposes. In conjunction with a variable loading technique, we present an efficient technique that gives the covariance shaping least-squares estimator robust capabilities against NLOS effects. Compared with other methods, the proposed improved estimator has high accuracy under white Gaussian measurement noise and NLOS effects.

  3. Path model analyzed with ordinary least squares multiple regression versus LISREL.

    PubMed

    Kline, T J; Klammer, J D

    2001-03-01

    The data of a specified path model using the variables of voice, perceived organizational support, being heard, and procedural justice were subjected to the two separate structural equation modeling analytic techniques--that of ordinary least squares regression and LISREL. A comparison of the results and differences between the analyses is discussed, with the LISREL approach being stronger from both theoretical and statistical perspectives. PMID:11403343

  4. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    SciTech Connect

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
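
    A minimal least-squares spline fit with fixed interior knots, using SciPy, gives the flavor of the approach; the paper's automatic, time-varying knot selection and error estimation are not reproduced here.

        # Least-squares cubic-spline smoothing with a fixed set of interior
        # knots (a simplification of the adaptive scheme described above).
        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        t = np.linspace(0.0, 1.0, 500)
        signal = np.sin(2 * np.pi * 3 * t)
        noisy = signal + 0.05 * np.random.default_rng(1).standard_normal(t.size)

        knots = np.linspace(0.1, 0.9, 15)     # fixed interior knots
        spline = LSQUnivariateSpline(t, noisy, knots, k=3)
        print(np.sqrt(np.mean((spline(t) - signal) ** 2)))  # RMS error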

  5. Least-Squares Approximation of an Improper by a Proper Correlation Matrix Using a Semi-Infinite Convex Program. Research Report 87-7.

    ERIC Educational Resources Information Center

    Knol, Dirk L.; ten Berge, Jos M. F.

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977). It is shown that the minimization problem…

  6. Weighted least-squares algorithm for phase unwrapping based on confidence level in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Shaohua; Yu, Jie; Yang, Cankun; Jiao, Shuai; Fan, Jun; Wan, Yanyan

    2015-12-01

    Phase unwrapping is a key step in InSAR (synthetic aperture radar interferometry) processing, and its result directly affects the accuracy of the DEM (digital elevation model) and of ground deformation estimates. However, decoherence phenomena such as shadow and layover in areas of severe land subsidence, where the terrain is steep and the slope changes greatly, cause error propagation in the differential wrapped phase and hence an inaccurate unwrapped phase. In order to eliminate the effect of noise and reduce the effect of undersampling caused by topographic factors, a weighted least-squares method based on a confidence level in the frequency domain is used in this study. The method expresses the terrain slope in the interferogram as the partial phase frequency in the range and azimuth directions, and integrates it into the confidence level. This parameter is used as a constraint in the nonlinear least-squares phase unwrapping algorithm, to smooth the unwrapped phase gradient where it is unreliable and to improve the accuracy of phase unwrapping. Finally, a comparison with interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm has higher accuracy and stability than the usual weighted least-squares phase unwrapping algorithms and accounts for terrain factors.

  7. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    SciTech Connect

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
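
    The calibration and prediction phases can be sketched as two ordinary linear least-squares solves; the spectra below are random stand-ins, and the nonzero-intercept (baseline) term follows the idea described above rather than the authors' exact formulation.

        # Classical least-squares calibration with nonzero intercepts, on
        # random stand-in spectra.  Calibration estimates pure-component
        # spectra K and baselines b from standards; prediction inverts the fit.
        import numpy as np

        rng = np.random.default_rng(2)
        K_true = rng.random((2, 50))                   # "pure" spectra
        C = rng.random((10, 2))                        # known concentrations
        A = C @ K_true + 0.01 + 0.001 * rng.standard_normal((10, 50))

        # calibration: A is approximately [C 1] [K; b]
        C1 = np.column_stack([C, np.ones(len(C))])
        KB, *_ = np.linalg.lstsq(C1, A, rcond=None)
        K_est, b_est = KB[:-1], KB[-1]

        # prediction: unknown spectrum is approximately [K; b]^T [c; 1]
        a_unk = np.array([0.3, 0.7]) @ K_true + 0.01
        c_est, *_ = np.linalg.lstsq(np.vstack([K_est, b_est]).T, a_unk, rcond=None)
        print(c_est)            # two concentrations plus a baseline scale near 1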

  8. Partial Least Squares (PLS) methods for neuroimaging: a tutorial and review.

    PubMed

    Krishnan, Anjali; Williams, Lynne J; McIntosh, Anthony Randal; Abdi, Hervé

    2011-05-15

    Partial Least Squares (PLS) methods are particularly suited to the analysis of relationships between measures of brain activity and of behavior or experimental design. In neuroimaging, PLS refers to two related methods: (1) symmetric PLS or Partial Least Squares Correlation (PLSC), and (2) asymmetric PLS or Partial Least Squares Regression (PLSR). The most popular (by far) version of PLS for neuroimaging is PLSC. It exists in several varieties based on the type of data that are related to brain activity: behavior PLSC analyzes the relationship between brain activity and behavioral data, task PLSC analyzes how brain activity relates to pre-defined categories or experimental design, seed PLSC analyzes the pattern of connectivity between brain regions, and multi-block or multi-table PLSC integrates one or more of these varieties in a common analysis. PLSR, in contrast to PLSC, is a predictive technique which, typically, predicts behavior (or design) from brain activity. For both PLS methods, statistical inferences are implemented using cross-validation techniques to identify significant patterns of voxel activation. This paper presents both PLS methods and illustrates them with small numerical examples and typical applications in neuroimaging. PMID:20656037
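
    A minimal PLSR example with scikit-learn shows the predictive flavor of PLS; the "brain activity" and "behavior" data are random stand-ins. PLSC, the symmetric variant, is not part of scikit-learn and would instead be built from an SVD of the cross-covariance matrix.

        # PLSR with scikit-learn: predicting a behavioral score from random
        # stand-in "voxel" data.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        X = rng.standard_normal((40, 500))            # 40 scans x 500 voxels
        beta = np.zeros(500)
        beta[:10] = 1.0                               # 10 informative voxels
        y = X @ beta + 0.5 * rng.standard_normal(40)  # behavioral score

        pls = PLSRegression(n_components=3)
        pls.fit(X, y)
        print(pls.score(X, y))                        # in-sample R^2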

  9. Local Least Squares Spectral Filtering and Combination by Harmonic Functions on the Sphere

    NASA Astrophysics Data System (ADS)

    Sjöberg, L.

    2011-01-01

    Least squares spectral combination is a well-known technique in physical geodesy. The established technique either suffers from the assumption of no correlations of errors between degrees or from a global optimisation of the variance or mean square error of the estimator. Today Earth gravitational models are available together with their full covariance matrices to rather high degrees, extra information that should be properly taken care of. Here we derive the local least squares spectral filter for a stochastic function on the sphere based on the spectral representation of the observable and its error covariance matrix. Second, the spectral combination of two erroneous harmonic series is derived based on their full covariance matrices. In both cases the transition from spectral representation of an estimator to an integral representation is demonstrated. Practical examples are given. Taking advantage of the full covariance matrices in the spectral combination implies a huge computational burden in determining the least squares filters and combinations for high-degree spherical harmonic series. A reasonable compromise between accuracy of estimator and workload could be to consider only one weight parameter/degree, yielding the optimum filtering and combination of Laplace series.

  10. Analysis and computation of a least-squares method for consistent mesh tying

    NASA Astrophysics Data System (ADS)

    Day, David; Bochev, Pavel

    2008-08-01

    In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197-1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342-352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Theoretical error estimates are illustrated by numerical experiments.

  11. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    NASA Astrophysics Data System (ADS)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms have been used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We used the UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the semi-supervised LSSVM model is greatly improved, and that the improvement grows as the number of training samples decreases. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.
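
    The supervised LSSVM core that SLSSVM builds on reduces training to a single linear system, sketched below with an RBF kernel; the parameter names and values are illustrative, and the semi-supervised extension is not shown.

        # Supervised LS-SVM regression: training is one linear solve.
        import numpy as np

        def lssvm_train(X, y, gamma=10.0, sigma=1.0):
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-sq / (2 * sigma**2))          # RBF kernel matrix
            n = len(y)
            M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                          [np.ones((n, 1)), K + np.eye(n) / gamma]])
            sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]                    # bias b, dual weights

        X = np.linspace(0, 1, 30)[:, None]
        y = np.sin(2 * np.pi * X[:, 0])
        b, alpha = lssvm_train(X, y)
        print(b, alpha[:3])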

  12. A novel approach to the experimental study on methane/steam reforming kinetics using the Orthogonal Least Squares method

    NASA Astrophysics Data System (ADS)

    Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.

    2014-09-01

    For a mathematical model based on the results of physical measurements, it becomes possible to determine the influence of those measurements on the final solution and its accuracy. In classical approaches, however, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. Previously published kinetics of the reforming process are mutually divergent. To obtain the most probable values of the kinetic parameters and enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method incorporates all the experimental results into the mathematical model, which becomes internally contradictory because the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all the variables in the system. In this paper, the reaction rate was evaluated after its pre-determination by a preliminary calculation based on experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.

  13. A least-squares minimisation approach to depth determination from numerical second horizontal self-potential anomalies

    NASA Astrophysics Data System (ADS)

    Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed

    2009-06-01

    This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.

  14. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified coded source imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  15. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
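
    A sketch of the unconstrained idea: solve for the attitude matrix with a QR-based least-squares solve and, optionally, project onto the nearest orthogonal matrix with an SVD. The SVD step is a standard add-on shown for illustration, not necessarily part of the paper's algorithm.

        # Unconstrained least-squares attitude: solve refs @ A.T = meas (in the
        # least-squares sense), then snap A to the nearest orthogonal matrix.
        import numpy as np

        rng = np.random.default_rng(4)
        R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # orthogonal
        refs = rng.standard_normal((20, 3))                     # reference vectors
        meas = refs @ R_true.T + 1e-3 * rng.standard_normal((20, 3))

        A, *_ = np.linalg.lstsq(refs, meas, rcond=None)
        A = A.T                                      # so that A @ r matches m
        U, _, Vt = np.linalg.svd(A)
        R = U @ Vt                                   # nearest orthogonal matrix
        print(np.abs(R - R_true).max())              # small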

  16. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2006-04-01

    Fortran 77 software is given for least squares smoothing of data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics

  17. A phase-based hybridizable discontinuous Galerkin method for the numerical solution of the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.; Reitich, F.; Cockburn, B.

    2015-06-01

    We introduce a new hybridizable discontinuous Galerkin (HDG) method for the numerical solution of the Helmholtz equation over a wide range of wave frequencies. Our approach combines the HDG methodology with geometrical optics in a fashion that allows us to take advantage of the strengths of these two methodologies. The phase-based HDG method is devised as follows. First, we enrich the local approximation spaces with precomputed phases which are solutions of the eikonal equation in geometrical optics. Second, we propose a novel scheme that combines the HDG method with ray tracing to compute multivalued solution of the eikonal equation. Third, we utilize the proper orthogonal decomposition to remove redundant modes and obtain locally orthogonal basis functions which are then used to construct the global approximation spaces of the phase-based HDG method. And fourth, we propose an appropriate choice of the stabilization parameter to guarantee stability and accuracy for the proposed method. Numerical experiments presented show that optimal orders of convergence are achieved, that the number of degrees of freedom to achieve a given accuracy is independent of the wave number, and that the number of unknowns required to achieve a given accuracy with the proposed method is orders of magnitude smaller than that with the standard finite element method.

  18. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
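
    A one-dimensional moving least squares evaluator shows the basic mechanism compared here: each query point gets its own weighted polynomial fit. The Gaussian weight and bandwidth are illustrative choices; Kriging, RBF, and the interpolating IMLS variant are not shown.

        # 1-D moving least squares: a Gaussian-weighted quadratic fit is
        # re-solved at each query point.
        import numpy as np

        def mls_eval(xq, x, y, h=0.2, degree=2):
            V = np.vander(x, degree + 1)             # polynomial basis
            w = np.sqrt(np.exp(-((x - xq) / h) ** 2))
            coef, *_ = np.linalg.lstsq(w[:, None] * V, w * y, rcond=None)
            return np.polyval(coef, xq)

        x = np.linspace(0, 1, 50)
        y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(5).standard_normal(50)
        print(mls_eval(0.37, x, y))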

  19. Signs of divided differences yield least squares data fitting with constrained monotonicity or convexity

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2002-09-01

    Methods are presented for least squares data smoothing by using the signs of divided differences of the smoothed values. Professor M.J.D. Powell initiated the subject in the early 1980s and since then, theory, algorithms and FORTRAN software have made it applicable to several disciplines in various ways. Let us consider n data measurements of a univariate function which have been altered by random errors. Then it is usual for the divided differences of the measurements to show sign alterations, which are probably due to data errors. We make the least sum of squares change to the measurements, by requiring the sequence of divided differences of order m to have at most q sign changes for some prescribed integer q. The positions of the sign changes are integer variables of the optimization calculation, which implies a combinatorial problem whose solution can require about O(n^q) quadratic programming calculations in n variables and n-m constraints. Suitable methods have been developed for the following cases. It has been found that a dynamic programming procedure can calculate the global minimum for the important cases of piecewise monotonicity (m=1, q >= 1) and piecewise convexity/concavity (m=2, q >= 1) of the smoothed values. The complexity of the procedure in the case of m=1 is O(n^2 + qn log_2 n) computer operations, while it is reduced to only O(n) when q=0 (monotonicity) and q=1 (increasing/decreasing monotonicity). The case m=2, q >= 1 requires O(qn^2) computer operations and n^2 quadratic programming calculations, which is reduced to one and n-2 quadratic programming calculations when m=2, q=0 (convexity) and m=2, q=1 (convexity/concavity), respectively. Unfortunately, the technique that achieves this efficiency cannot be generalized to the highly nonlinear case m >= 3, q >= 2. However, the case m >= 3, q=0 is solved by a special strictly

  20. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.
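
    The flavor of the iterative adjustment can be sketched with a deliberately simplified one-parameter radial distortion model; the paper's algorithm solves the full nonlinear transformation equations, which are not reproduced here.

        # Iterative least-squares adjustment of a one-parameter radial
        # distortion model x' = x (1 + k1 r^2); a toy version of the problem.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(6)
        xy = rng.uniform(-1, 1, (100, 2))             # ideal image coordinates
        r2 = (xy ** 2).sum(1, keepdims=True)
        observed = xy * (1 + 0.1 * r2) + 1e-4 * rng.standard_normal((100, 2))

        def residuals(params):
            return (xy * (1 + params[0] * r2) - observed).ravel()

        fit = least_squares(residuals, x0=[0.0])      # Gauss-Newton-style iterations
        print(fit.x)                                  # close to 0.1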

  1. Least squares support vector machines for direction of arrival estimation with error control and validation.

    SciTech Connect

    Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

    2003-02-01

    The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

  2. A comparison of three additive tree algorithms that rely on a least-squares loss criterion.

    PubMed

    Smith, T J

    1998-11-01

    The performances of three additive tree algorithms which seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946

  3. Retinal Oximetry with 510-600 nm Light Based on Partial Least-Squares Regression Technique

    NASA Astrophysics Data System (ADS)

    Arimoto, Hidenobu; Furukawa, Hiromitsu

    2010-11-01

    The oxygen saturation distribution in the retinal blood stream is estimated by measuring spectral images and adopting partial least-squares regression. The wavelength range used for the calculation is from 510 to 600 nm. The regression model for estimating the retinal oxygen saturation is built on the basis of arterial and venous blood spectra. The experiment is performed using an originally designed spectral ophthalmoscope. The obtained two-dimensional (2D) oxygen saturation map indicates reasonable oxygen levels across the retina. The measurement quality is compared with that obtained using other wavelength sets and data processing methods.

  4. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    SciTech Connect

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.

  5. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  6. 3-D foliation unfolding with volume and bed-length least-squares conservation

    SciTech Connect

    Leger, M.; Morvan, J.M.; Thibaut, M.

    1994-12-31

    Restoration of a geologic structure at earlier times is a good means to criticize, and then to improve, its interpretation. Restoration software already exists in 2D, but a lot of work remains to be done in 3D. The authors focus on the interbedding slip phenomenon, with bed-length and volume conservation. They unfold a (geometrical) foliation by optimizing the following least-squares criteria: horizontalness, and bed-length and volume conservation, under equality constraints related to the position of the "binding" or "pin-surface".

  7. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  8. In situ ply strength: An initial assessment. [using laminate fracture data and a least squares method

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sullivan, T. L.

    1978-01-01

    The in situ ply strengths in several low-modulus and high-modulus fiber composites were calculated using laminate fracture data in conjunction with the least squares method. The laminate fracture data were obtained from tests on Modmor-I graphite/epoxy, AS-graphite/epoxy, boron/epoxy and E-glass/epoxy. The results show that the calculated in situ ply strengths can be considerably different from those measured in unidirectional composites, especially the transverse strengths and those in angleplied laminates with transply cracks.

  9. STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.

    PubMed

    Thornhill, D P; Schwerzel, E

    1985-05-01

    A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett Packard HP 41 CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence. PMID:3839530
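
    The same kind of fit is easy to reproduce today with SciPy standing in for the HP-41CV program: a biexponential model, stripping-style initial estimates supplied by hand, and optional reciprocal weighting via the sigma argument.

        # Biexponential drug-level fit with optional reciprocal weighting
        # (sigma=c down-weights large concentrations, as in the abstract).
        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, A, alpha, B, beta):
            return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

        t = np.linspace(0.25, 12.0, 15)               # up to 15 data pairs
        c = biexp(t, 8.0, 1.2, 2.0, 0.15)
        c *= 1 + 0.03 * np.random.default_rng(7).standard_normal(t.size)

        popt, pcov = curve_fit(biexp, t, c, p0=[5.0, 1.0, 1.0, 0.1], sigma=c)
        print(popt)                                   # close to 8.0, 1.2, 2.0, 0.15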

  10. Weighted least square estimates of the parameters of a model of survivorship probabilities.

    PubMed

    Mitra, S

    1987-06-01

    "A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212

  11. Discontinuous Galerkin solution of the Navier-Stokes equations on deformable domains

    SciTech Connect

    Persson, P.-O.; Bonet, J.; Peraire, J.

    2009-01-13

    We describe a method for computing time-dependent solutions to the compressible Navier-Stokes equations on variable geometries. We introduce a continuous mapping between a fixed reference configuration and the time-varying domain. By writing the Navier-Stokes equations as a conservation law for the independent variables in the reference configuration, the complexity introduced by the variable geometry is reduced to solving a transformed conservation law in a fixed reference configuration. The spatial discretization is carried out using the Discontinuous Galerkin method on unstructured meshes of triangles, while the time integration is performed using an explicit Runge-Kutta method. For general domain changes, the standard scheme fails to preserve exactly the free-stream solution, which leads to some accuracy degradation, especially for low order approximations. This situation is remedied by adding an additional equation for the time evolution of the transformation Jacobian to the original conservation law and correcting for the accumulated metric integration errors. A number of results are shown to illustrate the flexibility of the approach to handle high order approximations on complex geometries.

  12. Numerical simulation of flows around two circular cylinders by mesh-free least square-based finite difference methods

    NASA Astrophysics Data System (ADS)

    Ding, H.; Shu, C.; Yeo, K. S.; Xu, D.

    2007-01-01

    In this paper, the mesh-free least square-based finite difference (MLSFD) method is applied to numerically study the flow field around two circular cylinders arranged in side-by-side and tandem configurations. For each configuration, various geometrical arrangements are considered, in order to reveal the different flow regimes characterized by the gap between the two cylinders. In this work, the flow simulations are carried out in the low Reynolds number range, that is, Re=100 and 200. Instantaneous vorticity contours and streamlines around the two cylinders are used as the visualization aids. Some flow parameters such as Strouhal number, drag and lift coefficients calculated from the solution are provided and quantitatively compared with those provided by other researchers.

  13. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353

  14. Quantifying silica in filter-deposited mine dusts using infrared spectra and partial least squares regression.

    PubMed

    Weakley, Andrew Todd; Miller, Arthur L; Griffiths, Peter R; Bayman, Sean J

    2014-07-01

    The feasibility of measuring airborne crystalline silica (α-quartz) in noncoal mine dusts using a direct-on-filter method of analysis is demonstrated. Respirable α-quartz was quantified by applying a partial least squares (PLS) regression to the infrared transmission spectra of mine-dust samples deposited on porous polymeric filters. This direct-on-filter method deviates from the current regulatory determination of respirable α-quartz by refraining from ashing the sampling filter and redepositing the analyte prior to quantification using either infrared spectrometry (for coal mines) or x-ray diffraction (XRD; for noncoal mines). Since XRD is not field portable, this study evaluated the efficacy of Fourier transform infrared spectrometry for silica determination in noncoal mine dusts. PLS regressions were performed using select regions of the spectra from nonashed samples, with important wavenumbers selected using a novel modification of the Monte Carlo unimportant variable elimination procedure. Wavenumber selection helped to improve PLS prediction, reduce the number of required PLS factors, and identify additional silica bands distinct from those currently used in regulatory enforcement. PLS regression appeared robust against the influence of residual filter and extraneous mineral absorptions while outperforming ordinary least squares calibration. These results support the quantification of respirable silica in noncoal mines using field-portable infrared spectrometers. PMID:24830397

  15. Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.

    PubMed

    Mohammed, Goran Abdulrahman; Hou, Ming

    2016-03-01

    The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit the simulated and experimental data reasonably well, with the Gordon-Huxley-Julian model and the asymmetric Gaussian function outperforming the other functions in terms of the statistical test scores root-mean-squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6%) for the simulated data and (0.2-5%) for the experimental data. The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on the active force-length relationships of skeletal muscles that generate the forces causing movements of human and animal bodies. PMID:26276984

  16. Online segmentation of time series based on polynomial least-squares approximations.

    PubMed

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial (obtained by means of the update steps) can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data streaming applications, offers high accuracy at very low computational costs. PMID:20975120
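
    The window-wise least-squares idea can be sketched with a plain refit per window; the constant-time orthogonal-polynomial update that makes SwiftSeg fast is deliberately replaced by a naive recomputation for clarity.

        # Windowed least-squares polynomial features: level, slope and
        # curvature per window, recomputed naively for each position.
        import numpy as np

        def window_features(y, width=21, degree=2):
            t = np.arange(width) - width // 2         # window-centered axis
            V = np.vander(t, degree + 1)
            feats = []
            for i in range(len(y) - width + 1):
                c2, c1, c0 = np.linalg.lstsq(V, y[i:i + width], rcond=None)[0]
                feats.append((c0, c1, 2 * c2))        # level, slope, curvature
            return np.array(feats)

        y = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, 0, 100)])
        print(window_features(y)[95:105, 1])          # slope flips at the kink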

  17. Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.

    PubMed

    Li, Yusong; LeBoeuf, Eugene J; Basu, P K

    2005-10-01

    A numerical model of the lattice Boltzmann method (LBM) utilizing least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh; while high stability is implied from a von Neumann linearized stability analysis. Implemented on unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory compared to traditional LBM. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of a randomly generated porous media, which further demonstrates its inherent geometric flexibility. PMID:16383571

  18. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    PubMed Central

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model through remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by the covariance matrix descriptor encoding the moving information, then is classified into a normal or an abnormal frame. Experiments are conducted, on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset, to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629

  19. A least square real time quality control routine for the North Warning Netted Radar System

    NASA Astrophysics Data System (ADS)

    Leung, Henry; Blanchette, Martin

    1994-12-01

    The ground surveillance radar group of the Radar and Space Division of DREO has a requirement to investigate the feasibility of, and propose a cost-effective approach to, correcting the Real Time Quality Control (RTQC) registration error problem of the North Warning System (NWS). The U.S.-developed RTQC algorithm works poorly at northern Canadian radar sites. This is mainly caused by the inability of the RTQC algorithm to properly calculate the radar position bias when there is low aircraft traffic in areas of overlapping radar coverage. This problem results in track ambiguity and in the display of ghost tracks. In this report, a modification of the RTQC algorithm using least-squares techniques is proposed. The proposed least-squares RTQC (LS-RTQC) algorithm was tested with real recorded data from the NWS and was found to work efficiently, in the sense that it performs properly in a low aircraft traffic environment with low computational complexity. The algorithm has been sent to the NORAD software support unit at Tyndall Air Force Base for testing.

  20. [Biomass Compositional Analysis Using Sparse Partial Least Squares Regression and Near Infrared Spectrum Technique].

    PubMed

    Yao, Yan; Wang, Chang-yue; Liu, Hui-jun; Tang, Jian-bin; Cai, Jin-hui; Wang, Jing-jun

    2015-07-01

    Forest bio-fuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called Sparse Partial Least Squares Regression (SPLS) is used, combined with near infrared spectroscopy, to construct a proximate analysis model of the fuel characteristics of sawdust. The moisture, ash, volatile matter and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected with a Nicolet NIR spectrometer. After filtering by wavelet transform, all samples were divided into a training set and a validation set according to sample category and producing area. SPLS, Principal Component Regression (PCR), Partial Least Squares Regression (PLS) and the Least Absolute Shrinkage and Selection Operator (LASSO) were used to construct prediction models. The results suggest that SPLS can select grouped wavelengths and improve the prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while those of the other components have not yet been confirmed. In summary, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and it should play an increasingly important role in the field of NIR applications. PMID:26717741

  1. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
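
    The merge-fit example is easy to verify numerically: with a priori (inverse-variance) weights, merging subset averages reproduces the global fit exactly. The subset sizes and values below are arbitrary.

        # Global fit vs two-step merge with a priori (inverse-variance)
        # weights: the two routes to the average agree exactly.
        import numpy as np

        rng = np.random.default_rng(8)
        subsets = [rng.normal(5.0, 1.0, n) for n in (5, 12, 8)]

        global_mean = np.mean(np.concatenate(subsets))  # one-step fit

        means = np.array([s.mean() for s in subsets])   # step 1: subset fits
        weights = np.array([len(s) for s in subsets])   # weight of each mean is n/var
        merge_mean = (weights * means).sum() / weights.sum()  # step 2: merge

        print(global_mean, merge_mean)                  # identical to roundoff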

  2. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.

  3. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
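
    Technique (iii), recursive least squares without explicit matrix inversion, can be shown in isolation via the Sherman-Morrison identity; raw inputs stand in for the kernel features used in the paper.

        # Recursive least squares with a Sherman-Morrison rank-1 update,
        # i.e. no explicit matrix inversion per sample.
        import numpy as np

        class RLS:
            def __init__(self, dim, delta=100.0):
                self.w = np.zeros(dim)
                self.P = delta * np.eye(dim)      # inverse-Gram estimate

            def update(self, x, y):
                Px = self.P @ x
                k = Px / (1.0 + x @ Px)           # gain vector
                self.w += k * (y - x @ self.w)    # correct by prediction error
                self.P -= np.outer(k, Px)         # rank-1 downdate

        rng = np.random.default_rng(9)
        rls, w_true = RLS(3), np.array([1.0, -2.0, 0.5])
        for _ in range(500):
            x = rng.standard_normal(3)
            rls.update(x, w_true @ x + 0.01 * rng.standard_normal())
        print(rls.w)                              # close to w_true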

  4. An Augmented Classical Least Squares Method for Quantitative Raman Spectral Analysis against Component Information Loss

    PubMed Central

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss and its predictive power is comparable to that of PLS or PCR. PMID:23956689

  5. Determination of Protein Secondary Structure from Infrared Spectra Using Partial Least-Squares Regression.

    PubMed

    Wilcox, Kieaibi E; Blanch, Ewan W; Doig, Andrew J

    2016-07-12

    Infrared (IR) spectra contain substantial information about protein structure. This has previously most often been exploited by using known band assignments. Here, we convert spectral intensities in bins within Amide I and II regions to vectors and apply machine learning methods to determine protein secondary structure. Partial least squares was performed on spectra of 90 proteins in H2O. After preprocessing and removal of outliers, 84 proteins were used for this work. Standard normal variate and second-derivative preprocessing methods on the combined Amide I and II data generally gave the best performance, with root-mean-square values for prediction of ∼12% for α-helix, ∼7% for β-sheet, 7% for antiparallel β-sheet, and ∼8% for other conformations. Analysis of Fourier transform infrared (FTIR) spectra of 16 proteins in D2O showed that secondary structure determination was slightly poorer than in H2O. Interval partial least squares was used to identify the critical regions within spectra for secondary structure prediction and showed that the sides of bands were most valuable, rather than their peak maxima. In conclusion, we have shown that multivariate analysis of protein FTIR spectra can give α-helix, β-sheet, other, and antiparallel β-sheet contents with good accuracy, comparable to that of circular dichroism, which is widely used for this purpose. PMID:27322779

  6. Modeling individual HRTF tensor using high-order partial least squares

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Li, Lin

    2014-12-01

    A tensor is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of the measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters, as the inputs of the model, are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core tensor. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), aiming to seek a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings are used to control the complexity of the model and to prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance in predicting individual HRTFs, especially for sound directions ipsilateral to the concerned ear. Subjective listening tests show that the predicted individual HRTFs approximate the measured HRTFs well for sound localization.

  7. Random dynamic load identification based on error analysis and weighted total least squares method

    NASA Astrophysics Data System (ADS)

    Jia, You; Yang, Zhichun; Guo, Ning; Wang, Le

    2015-12-01

    In most cases, random dynamic load identification problems in structural dynamics are ill-posed. A common approach to treating these problems is to reformulate them as well-posed problems by numerical regularization methods. In a previous paper by the authors, a random dynamic load identification model was built, and a weighted regularization approach based on the proper orthogonal decomposition (POD) was proposed to identify the random dynamic loads. In this paper, the upper bound of the relative load identification error in the frequency domain is derived. The selection condition and the specific form of the weighting matrix are also proposed and validated analytically and experimentally. In order to improve the accuracy of random dynamic load identification, a weighted total least squares method is proposed to reduce the impact of these errors. To further validate the feasibility and effectiveness of the proposed method, a comparative study of the proposed method and other methods is conducted experimentally. The experimental results demonstrate that the weighted total least squares method is more effective than the other methods for random dynamic load identification.
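
    The unweighted core of the method, plain total least squares via the SVD, can be sketched as follows; the paper's weighted variant additionally shapes the solution with error-based weights, which is not shown.

        # Plain total least squares via the SVD: errors are permitted in both
        # the matrix and the right-hand side.
        import numpy as np

        def tls(A, b):
            Z = np.column_stack([A, b])
            v = np.linalg.svd(Z)[2][-1]           # smallest right singular vector
            return -v[:-1] / v[-1]                # x with (A + dA) x = b + db

        rng = np.random.default_rng(10)
        A_true = rng.standard_normal((100, 2))
        x_true = np.array([1.5, -0.7])
        b = A_true @ x_true + 0.05 * rng.standard_normal(100)
        A_obs = A_true + 0.05 * rng.standard_normal((100, 2))  # noisy matrix
        print(tls(A_obs, b))                      # close to x_true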

  8. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate the data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. To obtain an accurate, high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is a demanding task. The limited-memory BFGS (L-BFGS) method approximates the inverse Hessian indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration and validate the approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method effectively recovers the reflectivity model and converges faster than two comparison gradient methods. It is therefore promising for imaging of complex subsurface structures.
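
    A minimal sketch of the optimization step, assuming SciPy: L-BFGS applied to a toy linear least-squares objective ||Lm - d||², with a random matrix standing in for the demigration operator; the maxcor option caps the stored gradient history, mirroring the small m noted above.

```python
# Toy least-squares "migration" update with L-BFGS; L is an illustrative stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
L = rng.normal(size=(500, 200))           # forward (demigration) operator stand-in
m_true = np.zeros(200); m_true[::25] = 1.0
d = L @ m_true                            # "observed" data

def misfit(m):
    r = L @ m - d
    return 0.5 * r @ r, L.T @ r           # objective value and gradient

res = minimize(misfit, np.zeros(200), jac=True, method="L-BFGS-B",
               options={"maxcor": 10})    # keep only ~10 gradient pairs
print(res.fun, np.abs(res.x - m_true).max())
```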

  9. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By incorporating sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application to online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning; (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise; (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity; (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost; and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
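
    The recursive least-squares building block referred to in (iii) can be written in a few lines; the sketch below is the generic Sherman-Morrison RLS update on synthetic features, not the full kernel LSTD algorithm with sparsification and regularization.

```python
# Generic recursive least-squares update (Sherman-Morrison form).
import numpy as np

def rls_update(w, P, x, y):
    """One RLS step for target y with feature vector x."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)        # gain vector
    w = w + k * (y - w @ x)        # correct by the prediction error
    P = P - np.outer(k, Px)        # rank-1 downdate of the inverse Gram matrix
    return w, P

d = 5
w, P = np.zeros(d), 1e3 * np.eye(d)
rng = np.random.default_rng(4)
w_true = rng.normal(size=d)
for _ in range(200):
    x = rng.normal(size=d)
    w, P = rls_update(w, P, x, w_true @ x + 0.01 * rng.normal())
print(np.abs(w - w_true).max())
```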

  10. Non-negative least-squares variance component estimation with application to GPS time series

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2016-05-01

    Negative variance component estimates can occur in many geodetic applications. This problem can be avoided by introducing non-negativity constraints on the variance components (VCs) into the stochastic model. Based on standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is not actually present, its amplitude is automatically estimated to be zero. The results obtained from 350 permanent GPS stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise: all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with (small) average amplitudes of 0.16, 0.13, and 0.45 mm/yr^{1/2} for the north, east, and up components, respectively. About half of the positive estimated random walk amplitudes are statistically significant, indicating that one-third of all time series have significant random walk noise.
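
    The NNLS step itself is available directly in SciPy; the toy example below mimics the behavior described above, where a genuinely absent component is driven to zero amplitude. The design matrix is a synthetic stand-in, not the paper's stochastic model.

```python
# Non-negative least squares with SciPy; columns play the role of noise components.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
A = np.abs(rng.normal(size=(50, 3)))       # white/flicker/random-walk terms (stand-in)
x_true = np.array([1.0, 0.4, 0.0])         # one component genuinely absent
b = A @ x_true + 0.01 * rng.normal(size=50)

x_hat, residual_norm = nnls(A, b)
print(x_hat)    # the absent component is typically estimated as exactly zero
```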

  11. An Alternating Least Squares Algorithm for Fitting the Two- and Three-Way DEDICOM Model and the IDIOSCAL Model.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.

    1989-01-01

    An alternating least squares algorithm is offered for fitting the DEcomposition into DIrectional COMponents (DEDICOM) model for representing asymmetric relations among a set of objects via a set of coordinates for the objects on a limited number of dimensions. An algorithm is presented for fitting the IDIOSCAL model in the least squares sense.…

  12. Partial least-squares regression for linking land-cover patterns to soil erosion and sediment yield in watersheds

    NASA Astrophysics Data System (ADS)

    Shi, Z. H.; Ai, L.; Li, X.; Huang, X. D.; Wu, G. L.; Liao, W.

    2013-08-01

    There are strong ties between land-cover patterns and soil erosion and sediment yield in watersheds. The spatial configuration of land cover has recently become an important aspect of the study of geomorphological processes related to erosion within watersheds. Many studies have used multivariate regression techniques to explore the response of soil erosion and sediment yield to land-cover patterns in watersheds. However, many landscape metrics are highly correlated and may introduce redundancy that violates the assumptions of a traditional least-squares approach, leading to singular solutions or otherwise biased parameter estimates and confidence intervals. Here, we investigated the landscape patterns within watersheds in the Upper Du River watershed (8973 km²) in China and examined how the spatial patterns of land cover are related to watershed soil erosion and sediment yield using hydrological modeling and partial least-squares regression (PLSR). The results indicate that watershed soil erosion and sediment yield are closely associated with land-cover patterns. At the landscape level, characteristics such as Shannon's diversity index (SHDI), aggregation index (AI), largest patch index (LPI), contagion (CONTAG), and patch cohesion index (COHESION) were identified as the primary metrics controlling watershed soil erosion and sediment yield. The landscape characteristics could account for as much as 65% and 74% of the variation in soil erosion and sediment yield, respectively. Greater interspersion and an increased number of patch land-cover types may significantly accelerate soil erosion and increase sediment export. PLSR offers a simple way to determine the relationships between land-cover patterns and watershed soil erosion and sediment yield, providing quantitative information to allow decision makers to make better choices regarding landscape planning. With readily available remote sensing data and rapid…

  13. Radio astronomical image formation using constrained least squares and Krylov subspaces

    NASA Astrophysics Data System (ADS)

    Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan

    2016-04-01

    Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large, so the introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and describe the steps needed to implement it. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, providing a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. The results illustrate that, for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources.
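
    A small sketch of least squares with box constraints, assuming SciPy's lsq_linear: pixel values are constrained to lie between zero and an upper bound, with a synthetic measurement matrix standing in for the array response and a crude stand-in for the dirty-image bound.

```python
# Bounded least squares: non-negative pixels with a per-pixel upper bound.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(6)
M = rng.normal(size=(300, 100))            # measurement matrix (illustrative)
x_true = np.maximum(rng.normal(size=100), 0.0)
y = M @ x_true + 0.01 * rng.normal(size=300)

upper = np.full(100, x_true.max() * 2)     # stand-in for the dirty-image bound
res = lsq_linear(M, y, bounds=(0.0, upper))
print(res.x.min(), res.x.max())            # solution respects both bounds
```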

  14. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To predict the biochar yield from cattle manure pyrolysis conveniently, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For identification and prediction evaluation of the models, a data set of 33 experimental data points was used, obtained with a laboratory-scale fixed-bed reaction system. The results demonstrate that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfying predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes. PMID:26708483
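
    A minimal LS-SVM regression sketch on synthetic data: unlike a standard SVM, the dual problem reduces to a single linear system in the kernel matrix. The RBF kernel, γ value, and data are illustrative assumptions, not the paper's settings.

```python
# Minimal LS-SVM regression: solve the KKT linear system in the kernel matrix.
import numpy as np

def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(33, 1))            # 33 samples, echoing the paper
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=33)

gamma, K, n = 10.0, rbf(X, X), len(X)
# KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf(X_test, X) @ alpha + b)               # LS-SVM predictions
```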

  15. Quantification of brain lipids by FTIR spectroscopy and partial least squares regression

    NASA Astrophysics Data System (ADS)

    Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph

    2009-01-01

    Brain tissue is characterized by a high lipid content, which decreases, with changes in lipid composition, during the transformation from normal brain tissue to tumors. Therefore, the analysis of brain lipids might complement existing diagnostic tools for determining tumor type and grade. The objective of this work is to extract lipids from the gray matter and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts, and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts predicted by the PLS regression of IR spectra was compared with lipid quantification by thin layer chromatography.

  16. The Least Squares Stochastic Finite Element Method in Structural Stability Analysis of Steel Skeletal Structures

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Szafran, J.

    2015-05-01

    The main purpose of this work is to verify the influence of the weighting procedure in the Least Squares Method on the probabilistic moments resulting from the stability analysis of steel skeletal structures. We discuss this issue also in the context of the geometrical nonlinearity appearing in the Stochastic Finite Element Method equations for stability analysis, and the preservation of the Gaussian probability density function employed to model the Young's modulus of structural steel in this problem. The weighting procedure itself (with both triangular and Dirac-type weights) shows a rather marginal influence on all probabilistic coefficients under consideration. This hybrid stochastic computational technique, consisting of the FEM and computer algebra systems (the ROBOT and MAPLE packages), may be used for analogous nonlinear analyses in structural reliability assessment.

  17. The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.

    2006-01-01

    The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting starlight interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.
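
    As a hedged illustration of the fitting idea, the sketch below fits a monochromatic fringe model to noisy intensity samples with SciPy's curve_fit; the model form, wavelength, scan range, and noise level are assumptions for the demo, not MAM testbed values.

```python
# Nonlinear least-squares fit of a monochromatic fringe model.
import numpy as np
from scipy.optimize import curve_fit

def fringe(opd, amp, wavelength, phase, offset):
    return offset + amp * np.cos(2 * np.pi * opd / wavelength + phase)

rng = np.random.default_rng(8)
opd = np.linspace(0, 5e-6, 400)                    # scanned OPD in meters
data = fringe(opd, 1.0, 633e-9, 0.3, 2.0) + 0.01 * rng.normal(size=opd.size)

p0 = [1.0, 630e-9, 0.0, 2.0]                       # rough initial guess
popt, pcov = curve_fit(fringe, opd, data, p0=p0)
print("fitted wavelength:", popt[1])
```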

  18. A Robust PCT Method Based on the Complex Least Squares Adjustment Method

    NASA Astrophysics Data System (ADS)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving the vertical structure of vegetation. However, errors caused by temporal decorrelation and by the estimates of vegetation height and ground phase always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the complex least squares adjustment method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, more observations can be used to obtain more robust estimates of temporal decorrelation and vegetation height, which are then introduced into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the accuracy of the retrieved vegetation vertical structure.

  19. Elemental PGNAA analysis using gamma-gamma coincidence counting with the library least-squares approach

    NASA Astrophysics Data System (ADS)

    Metwally, Walid A.; Gardner, Robin P.; Mayo, Charles W.

    2004-01-01

    An accurate method for elemental analysis using gamma-gamma coincidence counting is presented. To demonstrate its feasibility for PGNAA, a system of three radioisotopes (Na-24, Co-60, and Cs-134) that emit coincident gamma rays was used. Two HPGe detectors were connected to a system that allowed singles and coincidences to be collected simultaneously. A known mixture of the three radioisotopes was used, and data were deliberately collected at relatively high counting rates to determine the effect of pulse pile-up distortion. The library least-squares analysis results for both normal and coincidence counting are presented and compared to the known amounts. The coincidence results are shown to give much better accuracy. In addition to the expected advantage of reduced background, the coincidence approach appears to be considerably more resistant to pulse pile-up distortion.
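
    The essence of a library least-squares analysis is an ordinary linear fit of the measured spectrum against a matrix of library spectra; the sketch below shows that on synthetic counts. Real analyses weight channels by counting statistics, which is omitted here.

```python
# Library least squares: unmix a measured spectrum into library components.
import numpy as np

rng = np.random.default_rng(9)
channels, n_iso = 1024, 3
library = np.abs(rng.normal(size=(channels, n_iso)))   # stand-in library spectra
amounts_true = np.array([2.0, 1.0, 0.5])
measured = library @ amounts_true + rng.normal(scale=0.1, size=channels)

amounts, *_ = np.linalg.lstsq(library, measured, rcond=None)
print(amounts)    # recovered component amounts
```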

  20. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM corrects the high-stroke, low-order aberrations, and the tweeter DM corrects the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to a precision of 0.03 μm root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
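
    A compact sketch of a damped least-squares reconstructor: the DM command vector is obtained from an influence matrix A and measured wavefront w, with the damping factor set to the median eigenvalue of AᵀA as an approximation of the prescription above. Matrix sizes are illustrative.

```python
# Damped least-squares DM command: c = (A^T A + lam*I)^{-1} A^T w.
import numpy as np

rng = np.random.default_rng(10)
A = rng.normal(size=(400, 97))        # influence matrix: actuators -> sensors
w = rng.normal(size=400)              # measured wavefront error

AtA = A.T @ A
lam = np.median(np.linalg.eigvalsh(AtA))    # median-eigenvalue damping factor
c = np.linalg.solve(AtA + lam * np.eye(A.shape[1]), A.T @ w)
print(c.shape)                        # one command per actuator
```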

  1. UNIMAP: a generalized least-squares map maker for Herschel data

    NASA Astrophysics Data System (ADS)

    Piazzo, Lorenzo; Calzoletti, Luca; Faustini, Fabiana; Pestalozzi, Michele; Pezzuto, Stefano; Elia, Davide; di Giorgio, Anna; Molinari, Sergio

    2015-02-01

    The Herschel space telescope hosts two infrared photometers with unprecedented resolution, sensitivity, and dynamic range. Map making, i.e., the formation of sky images from the instruments' data, is critical for the full exploitation of the satellite and is a difficult task, since the readouts are affected by several disturbances, most notably correlated noise. An effective map-making approach is based on generalized least squares (GLS). However, when applied to Herschel data this approach poses several challenges and requires specific pre- and post-processing. In this paper, we describe these challenges and introduce a set of algorithms and procedures which successfully address the issues. We also describe the implementation of the procedures and how they are integrated into an image formation software package called UNIMAP, which is the first GLS map maker capable of automatically producing quality Herschel images with manageable memory and complexity requirements.
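
    The GLS estimator at the heart of such a map maker is x̂ = (AᵀN⁻¹A)⁻¹AᵀN⁻¹d. The toy below uses a one-pixel-per-readout pointing matrix and a diagonal noise covariance; real Herschel noise is correlated, so the actual N is non-diagonal and handled very differently.

```python
# GLS map estimate with a diagonal noise covariance (toy case).
import numpy as np

rng = np.random.default_rng(11)
n_samples, n_pix = 2000, 100
A = np.zeros((n_samples, n_pix))                 # pointing: one pixel per readout
A[np.arange(n_samples), rng.integers(0, n_pix, n_samples)] = 1.0
x_true = rng.normal(size=n_pix)
noise_var = rng.uniform(0.5, 2.0, size=n_samples)
d = A @ x_true + rng.normal(size=n_samples) * np.sqrt(noise_var)

Ninv = 1.0 / noise_var                           # inverse diagonal covariance
x_gls = np.linalg.solve(A.T @ (Ninv[:, None] * A), A.T @ (Ninv * d))
print(np.abs(x_gls - x_true).mean())
```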

  2. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    SciTech Connect

    Le, Huy Q.; Molloi, Sabee

    2011-01-15

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues.

  3. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    SciTech Connect

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new multivariate analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  4. Low PMEPR OFDM Radar Waveform Design Using the Iterative Least Squares Algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Tianyao; Zhao, Tong

    2015-11-01

    This letter considers the waveform design of orthogonal frequency division multiplexing (OFDM) signals for radar applications and aims at mitigating the envelope fluctuation of OFDM. A novel method is proposed to reduce the peak-to-mean envelope power ratio (PMEPR), which is commonly used to evaluate this fluctuation. The proposed method is based on the tone reservation approach, in which some bits or subcarriers of the OFDM signal are allocated to decreasing the PMEPR. We introduce the coefficient of variation of envelopes (CVE) as the cost function for waveform optimization and develop an iterative least squares algorithm. Minimizing the CVE leads to a distinct PMEPR reduction, and the cost function is guaranteed to decrease monotonically under the iterative algorithm. Simulations demonstrate that the envelope is significantly smoothed by the proposed method.

  5. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  6. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    PubMed Central

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-01-01

    In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for the quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter, based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220

  7. [Modelling a penicillin fed-batch fermentation using least squares support vector machines].

    PubMed

    Liu, Yi; Wang, Hai-Qing

    2006-01-01

    Biochemical processes are usually characterized as seriously time-varying and nonlinear dynamic systems. Building first-principles models for them is very costly and difficult due to the absence of known inherent mechanisms and efficient on-line sensors. Furthermore, such detailed and complicated models do not necessarily guarantee good performance in practice. An approach via least squares support vector machines (LS-SVM), based on the Pensim simulator, is proposed for modelling the penicillin fed-batch fermentation process, and an adjustment strategy for the LS-SVM parameters is presented. Based on the proposed modelling method, predictive models of penicillin concentration, biomass concentration, and substrate concentration are obtained using very limited on-line measurements. The results show that the models established are accurate and efficient, and suffice for the requirements of control and optimization of biochemical processes. PMID:16572855

  8. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and that more weight should be given to the classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  9. Probability-based least square support vector regression metamodeling technique for crashworthiness optimization problems

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Li, Enying; Li, G. Y.

    2011-03-01

    This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear and large-scale problem composed of various nonlinearities, such as geometry, material, and contact, and it requires a large number of expensive function evaluations. In order to obtain a robust approximation efficiently, a probability-based least squares support vector regression is suggested for constructing metamodels by considering structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the design of experiment (DOE) stage. In this paper, a cylinder and a full-vehicle frontal collision are involved. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.

  10. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

    2009-10-01

    Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also introduced, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated on samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.
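
    A hedged sketch of the compensation idea: fit a low-order polynomial surface to the unwrapped phase by least squares, using only a sparse subset of points, then subtract it. The quadratic surface model and all sizes are assumptions for the demo, not the paper's exact choices.

```python
# Least-squares quadratic surface fit to an unwrapped phase map, then subtraction.
import numpy as np

rng = np.random.default_rng(12)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
phase = 0.002 * (x - 30) ** 2 + 0.001 * (y - 40) ** 2 + 0.05 * x   # aberration
phase = phase + 0.01 * rng.normal(size=phase.shape)                # plus noise

# Fit on a sparse subset of points (fewer than the full hologram matrix).
idx = rng.choice(phase.size, size=500, replace=False)
xs, ys, ps = x.ravel()[idx], y.ravel()[idx], phase.ravel()[idx]
B = np.column_stack([np.ones_like(xs), xs, ys, xs * ys, xs ** 2, ys ** 2])
coef, *_ = np.linalg.lstsq(B, ps, rcond=None)

full = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                        (x * y).ravel(), (x ** 2).ravel(), (y ** 2).ravel()])
compensated = phase - (full @ coef).reshape(ny, nx)
print(compensated.std())    # residual after removing the fitted curvature
```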

  11. A least-squares finite element method for 3D incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. Taylor-Görtler-like vortices are observed at Re = 1,000.

  12. Simultaneous evaluation of interrelated cross sections by generalized least-squares and related data file requirements

    SciTech Connect

    Poenitz, W.P.

    1984-10-25

    Though several cross sections have been designated as standards, they are not basic units and are interrelated by ratio measurements. Moreover, as such interactions as ⁶Li + n and ¹⁰B + n involve only two and three cross sections respectively, total cross section data become useful for the evaluation process. The problem can be resolved by a simultaneous evaluation of the available absolute and shape data for cross sections, ratios, sums, and average cross sections by generalized least squares. A data file is required for such an evaluation which contains the originally measured quantities and their uncertainty components. Establishing such a file is a substantial task because data were frequently reported as absolute cross sections when ratios were actually measured, without sufficient information on which reference cross section and which normalization were utilized. Reporting of uncertainties is often missing or incomplete. The requirements for data reporting will be discussed.

  13. Multidimensional least-squares resolution of Raman spectra from intermediates in sensitized photochemical reactions

    SciTech Connect

    Fister, J.C. III; Harris, J.M.

    1995-12-01

    Transient resonance Raman spectroscopy is used to elicit reaction kinetics and intermediate spectra from sensitized photochemical reactions. Nonlinear least-squares analysis of Raman spectra of a triplet-state photosensitizer (benzophenone), acquired as a function of laser intensity and/or quencher concentration, allows the Raman spectra of the sensitizer excited state and intermediate photoproducts to be resolved from the spectra of the ground state and solvent. In cases where physical models describing the system kinetics cannot be found, factor analysis techniques are used to obtain the intermediate spectra. Raman spectra of triplet-state benzophenone and acetophenone, obtained as functions of the laser excitation kinetics, and the Raman spectra of intermediates formed by energy transfer (triplet-state biacetyl) and hydrogen abstraction (benzhydrol radical) are discussed.

  14. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  15. Slip distribution of the 2010 Mentawai earthquake from GPS observation using least squares inversion method

    NASA Astrophysics Data System (ADS)

    Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya

    2016-05-01

    Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. Least squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimum slip distribution by weighting a smoothing constraint and constraining the slip to zero at the edges of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, slip in the dip direction tends to be dominant. The seismic moment calculated from the slip distribution was 6.89 × 10^20 N·m, which is equivalent to a magnitude of 7.8.

  16. Underwater terrain positioning method based on least squares estimation for AUV

    NASA Astrophysics Data System (ADS)

    Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

    2015-12-01

    To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for terrain-matching positioning is established, using multi-beam data as the terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo-localization occurring in flat areas of the terrain, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving positioning accuracy in flat terrain areas. Simulation experiments based on electronic charts and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirements of accurate underwater positioning.

  17. Online soft sensor of humidity in PEM fuel cell based on dynamic partial least squares.

    PubMed

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing humidity sensors are prohibitive for online measurements, so online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error over different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923

  18. Iterative least-squares fitting programs in pharmacokinetics for a programmable handheld calculator.

    PubMed

    Messori, A; Donati-Cori, G; Tendi, E

    1983-10-01

    Programs that perform a nonlinear least-squares fit to data conforming to one-compartment oral or two-compartment intravenous pharmacokinetic models are described. The programs are designed for use on a Hewlett-Packard HP-41 CV programmable calculator equipped with an extended-functions module and one or two extended-memory modules. Initial estimates of variables in the model are calculated by the method of residuals and then iteratively improved by the use of the Gauss-Newton algorithm as modified by Hartley. This modification minimizes convergence problems. The iterative-fitting procedure includes a routine for estimation of lag time for the one-compartment oral model. Clinical applications of the programs are illustrated using previously published data. Programming steps and user instructions are listed. The programs provide an efficient and inexpensive method of estimating pharmacokinetic variables. PMID:6688925

  19. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  20. Dehazing for single image with sky region via self-adaptive weighted least squares model

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Zhu, Jubo; Yan, Fengxia

    2016-04-01

    The physical imaging model, based on atmospheric absorption and scattering, plays an important role in single-image dehazing, and it is critical for dehazing algorithms built on this model that the transmission be accurately estimated. A self-adaptive weighted least squares (AWLS) model is proposed to refine the rough transmission extracted by the dark channel (DC) model. In our model, the gray-world hypothesis and an edge-preserving smoothing technique are integrated to optimize the transmission and remove the artifacts introduced by the DC model. The self-AWLS model has high computational efficiency and better prevents distortion of the recovered image when the hazy image contains a sky region, for which many other dehazing techniques are not applicable. Experimental results show that the proposed model is both effective and efficient for haze removal.

  1. Parameter identification of Jiles-Atherton model with nonlinear least-square method

    NASA Astrophysics Data System (ADS)

    Kis, Péter; Iványi, Amália

    2004-01-01

    A new method for the parameter identification of the widely used scalar Jiles-Atherton (J-A) model of hysteresis is detailed in this paper. The extended J-A model, which includes eddy-current and anomalous loss terms to account for the frequency dependence of the hysteresis, is also investigated. The five parameters of the classical J-A model can be determined from low-frequency hysteresis measurements. At higher frequency the effect of eddy currents is not negligible and the J-A model must be extended; the hysteresis loss and the coercive field increase with frequency. The nonlinear least-squares method is used for parameter fitting of both the classical and the extended J-A models. The curve fitting is executed automatically based on the initial parameters and the measured data.

  2. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as functions of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
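
    Assuming SciPy, the two-fit idea reads naturally as two LSQUnivariateSpline fits with different knot meshes, whose difference serves as a rough proxy for the signal-dependent F-error; the signal, noise level, and knot counts below are illustrative, not the paper's procedure in full.

```python
# Least-squares cubic-spline fits on two knot meshes; their difference
# gives a rough handle on the signal-dependent (F-) error.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(13)
xs = np.linspace(0.0, 1.0, 500)
data = np.sin(6 * np.pi * xs) + 0.1 * rng.normal(size=xs.size)

coarse = LSQUnivariateSpline(xs, data, np.linspace(0.1, 0.9, 8), k=3)
fine = LSQUnivariateSpline(xs, data, np.linspace(0.05, 0.95, 24), k=3)
f_error_proxy = coarse(xs) - fine(xs)
print(np.abs(f_error_proxy).max())
```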

  3. A library least-squares approach for scatter correction in gamma-ray tomography

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in computed tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector of the UoB gamma-ray tomography system.

  4. Quantitative infrared spectroscopy of glucose in blood using partial least-squares analyses

    SciTech Connect

    Ward, K.J.; Haaland, D.M.; Robinson, M.R.; Eaton, R.P.

    1989-01-01

    The concentration of glucose in drawn samples of human blood has been determined using attenuated total reflectance (ATR) Fourier transform infrared (FT-IR) spectroscopy and partial least-squares (PLS) multivariate calibration. A twelve sample calibration set over the physiological glucose range of 50-400 mg/deciliter (dl) resulted in an average error of 5.2 mg/dl. These results were obtained using a cross validated PLS calibration over all infrared data in the frequency range of 950-1200 cm⁻¹. These results are a dramatic improvement relative to those obtained by previous studies of this system using univariate peak height analyses. 3 refs., 3 figs.

  5. The least-squares finite element method for low-Mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the least-squares finite element method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, and finite element methods rely on mixed methods and operator-splitting methods. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  6. Improvement of high-order least-squares integration method for stereo deflectometry.

    PubMed

    Ren, Hongyu; Gao, Feng; Jiang, Xiangqian

    2015-12-01

    Stereo deflectometry measures the local slope of specular surfaces by using two CCD cameras as detectors and one LCD screen as a light source. To obtain 3D topography, the calculated slope data must be integrated. Currently, a high-order finite-difference-based least-squares integration (HFLI) method is used to improve integration accuracy. However, this method cannot be easily implemented in a circular domain or when the gradient data are incomplete. This paper proposes a modified, easily implemented integration method based on HFLI (EI-HFLI), which can work in arbitrary domains and can directly and conveniently handle incomplete gradient data. To carry out the proposed algorithm in a practical stereo deflectometry measurement, gradients are calculated in both CCD frames and then combined as the original data, meshed into a rectangular grid format. Simulations and experiments show that this modified method is feasible and works efficiently. PMID:26836684

  7. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906

  8. The use of least squares methods in functional optimization of energy use prediction models

    NASA Astrophysics Data System (ADS)

    Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.

    2012-06-01

    The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof, and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base-case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis, with the objective of minimizing the difference between the correlation-predicted results and the VisualDOE simulation results. The optimization reduces this difference to about 0.81%. In terms of the effects of the various parameters, the newly defined weighted surface area parameter was found to have the greatest effect on normalized annual energy use; insulating the roofs and walls also had a major effect on building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. The method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.

  9. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest; various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new methods depends…

  10. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis.

    PubMed

    Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C

    2014-12-01

    Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to the parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar performance to the reference method. PMID:25282423

  11. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes. PMID:23697409

  13. Discrete variable representation in electronic structure theory: Quadrature grids for least-squares tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2013-05-01

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
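
    Schematically, the LS-THC fit described in this record determines the core matrix Z of a two-sided grid factorization of the ERI by least squares (illustrative notation, not the authors' working equations):

```latex
(pq|rs) \;\approx\; \sum_{P,Q} x_p^{P}\, x_q^{P}\, Z_{PQ}\, x_r^{Q}\, x_s^{Q},
\qquad
Z \;=\; \arg\min_{Z}\, \Big\| (pq|rs) - \sum_{P,Q} x_p^{P} x_q^{P}\, Z_{PQ}\, x_r^{Q} x_s^{Q} \Big\|_{F}^{2},
```

    where x_p^P denotes the value of orbital p at grid point P; the quality of this fit is what the Becke, R-DVR and F-DVR grid constructions aim to secure.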

  14. [A hyperspectral subpixel target detection method based on inverse least squares method].

    PubMed

    Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun

    2009-01-01

    In the present paper, an inverse least squares (ILS) method combined with Mahalanobis distance outlier detection is discussed to detect subpixel targets in hyperspectral images. Firstly, the inverse model relating the target spectrum and all the pixel spectra was established, in which the accurate target spectrum was obtained previously; the standard normal variate (SNV) algorithm was then employed to preprocess each original pixel spectrum separately. After the pretreatment, the regression coefficients of the ILS model were calculated with the partial least squares (PLS) algorithm. Each point in the vector of regression coefficients corresponds to a pixel in the image. The Mahalanobis distance was calculated for each point in the regression coefficient vector. Because the Mahalanobis distance stands for the extent to which samples deviate from the total population, points with a Mahalanobis distance larger than 3σ were regarded as subpixel targets. In this algorithm, no prior information such as a representative background spectrum or a model of the background is required; only the target spectrum is needed. In addition, the result of the detection is insensitive to the complexity of the background. This method was applied to AVIRIS remote sensing data. For this simulation experiment, the AVIRIS data were freely downloaded from the official NASA website, the spectrum of a ground object in the AVIRIS hyperspectral image was picked as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection result of the method described above was compared with that of the orthogonal subspace projection (OSP) method. The result shows that the performance of the ILS method is better than that of the traditional OSP method. The ROC (receiver operating characteristic) curve and SNR were calculated, which indicates that the ILS method possesses higher detection accuracy and less computing time than the OSP algorithm. PMID:19385196
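
    A hedged sketch of this workflow, using sklearn's PLSRegression; array names and sizes are placeholders, and for the one-dimensional coefficient statistic the Mahalanobis distance reduces to an absolute z-score.

```python
# SNV preprocessing + inverse least squares (via PLS) + 3-sigma outlier flag.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / \
           spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pixels = snv(rng.random((1000, 200)))   # (n_pixels, n_bands), placeholder cube
target = rng.random(200)                # known target spectrum (n_bands,)

# Inverse model: regress the target spectrum on the pixel spectra, so each
# regression coefficient corresponds to one pixel of the image.
pls = PLSRegression(n_components=5).fit(pixels.T, target)
coef = pls.coef_.ravel()                # one coefficient per pixel

# Flag pixels deviating more than 3 sigma from the bulk of the coefficients.
z = (coef - coef.mean()) / coef.std()
suspect = np.flatnonzero(np.abs(z) > 3)
print(f"{suspect.size} candidate subpixel-target pixels")
```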

  15. Iterative weighting of multiblock data in the orthogonal partial least squares framework.

    PubMed

    Boccard, Julien; Rutledge, Douglas N

    2014-02-27

    The integration of multiple data sources has emerged as a pivotal aspect of assessing complex systems comprehensively. This new paradigm requires the ability to separate common and redundant information from specific and complementary information during the joint analysis of several data blocks. However, inherent problems encountered when analysing single tables are amplified with the generation of multiblock datasets. Finding the relationships between data layers of increasing complexity therefore constitutes a challenging task. In the present work, an algorithm is proposed for the supervised analysis of multiblock data structures. It combines the interpretability of the orthogonal partial least squares (OPLS) framework with the ability of common component and specific weights analysis (CCSWA) to weight each data table individually, in order to grasp its specificities and handle the different sources of Y-orthogonal variation efficiently. Three applications are proposed for illustration purposes. A first example refers to a quantitative structure-activity relationship study aiming to predict the binding affinity of flavonoids toward the P-glycoprotein based on physicochemical properties. A second application concerns the integration of several groups of sensory attributes for overall quality assessment of a series of red wines. A third case study highlights the ability of the method to combine very large heterogeneous data blocks from Omics experiments in systems biology. Results were compared to the reference multiblock partial least squares (MBPLS) method to assess the performance of the proposed algorithm in terms of predictive ability and model interpretability. In all cases, ComDim-OPLS was demonstrated to be a relevant data mining strategy for the simultaneous analysis of multiblock structures, accounting for specific variation sources in each dataset and providing a balance between predictive and descriptive purposes. PMID:24528656

  16. Fundamental solution of Laplace's equation in oblate spheroidal coordinates and Galerkin's matrix for Neumann's problem in Earth's gravity field studies

    NASA Astrophysics Data System (ADS)

    Holota, Petr; Nesvadba, Otakar

    2015-04-01

    In this paper the reciprocal distance is used for generating Galerkin's approximations in the weak solution of Neumann's problem, which has an important role in Earth's gravity field studies. The reciprocal distance has a natural tie to the fundamental solution of Laplace's partial differential equation, and in the paper it is represented by means of an expansion into a series of oblate spheroidal harmonics. Subsequently, the gradient vector of the reciprocal distance is constructed; in the computation of its components the expansion mentioned above is employed. The paper then focuses on the scalar product of reciprocal distance gradients at two different points and in particular on a series representation of a volume integral of the scalar product spread over an unbounded domain given by the exterior of an oblate spheroid (oblate ellipsoid of revolution). The integral yields the entries of Galerkin's matrix. The numerical interpretation of all the expansions used, as well as their software implementation within the OpenCL framework, is treated, including a numerical evaluation of Legendre functions of a real and an imaginary argument. In parallel, an approximate closed formula expressing the entries of Galerkin's matrix (with an accuracy up to terms multiplied by the square of the numerical eccentricity) is derived for convenience and comparison. Extensive numerical examples are included that illustrate the approach applied and demonstrate the accuracy of the derived formulas. Aspects related to practical applications are discussed.
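
    In schematic form (symbols here are illustrative), the Galerkin matrix entries discussed above are volume integrals, over the unbounded exterior Ω of the spheroid, of the scalar product of reciprocal-distance gradients:

```latex
A_{ij} \;=\; \int_{\Omega} \nabla \frac{1}{|\mathbf{x}-\mathbf{x}_i|}
\cdot \nabla \frac{1}{|\mathbf{x}-\mathbf{x}_j|} \; \mathrm{d}V ,
```

    with both gradients evaluated through the oblate spheroidal harmonic expansion of the reciprocal distance.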

  17. Smooth Particle Hydrodynamics with nonlinear Moving-Least-Squares WENO reconstruction to model anisotropic dispersion in porous media

    NASA Astrophysics Data System (ADS)

    Avesani, Diego; Herrera, Paulo; Chiogna, Gabriele; Bellin, Alberto; Dumbser, Michael

    2015-06-01

    Most numerical schemes applied to solve the advection-diffusion equation are affected by numerical diffusion. Moreover, unphysical results, such as oscillations and negative concentrations, may emerge when an anisotropic dispersion tensor is used, which induces even more severe errors in the solution of multispecies reactive transport. To cope with this long-standing problem we propose a modified version of the standard Smoothed Particle Hydrodynamics (SPH) method based on a Moving-Least-Squares-Weighted-Essentially-Non-Oscillatory (MLS-WENO) reconstruction of concentrations. This scheme formulation (called MWSPH) approximates the diffusive fluxes with a Rusanov-type Riemann solver based on a high-order WENO scheme. We compare the standard SPH with MWSPH for a few test cases, considering both homogeneous and heterogeneous flow fields and different anisotropy ratios of the dispersion tensor. We show that MWSPH is stable and accurate and that it reduces the occurrence of negative concentrations compared to standard SPH. When negative concentrations are observed, their absolute values are several orders of magnitude smaller than with standard SPH. In addition, MWSPH limits spurious oscillations in the numerical solution more effectively than classical SPH. Convergence analysis shows that MWSPH is computationally more demanding than SPH, but with the payoff of a more accurate solution, which in addition is less sensitive to particle positions. The latter property simplifies the time-consuming and often user-dependent procedure of defining the initial placement of the particles.
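
    The core of such a reconstruction is a weighted (moving) least-squares fit around each particle; a minimal one-dimensional sketch under assumed Gaussian kernel weights is given below. The WENO combination of several stencils used by MWSPH is omitted.

```python
# Moving-least-squares linear fit of a field around a particle at x0.
import numpy as np

def mls_linear_fit(x_nb, c_nb, x0, h):
    """Weighted linear LS fit; returns value and slope estimates at x0."""
    sw = np.sqrt(np.exp(-((x_nb - x0) / h) ** 2))  # sqrt of kernel weights
    A = np.column_stack([np.ones_like(x_nb), x_nb - x0])
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * c_nb, rcond=None)
    return beta[0], beta[1]

x_nb = np.array([-0.9, -0.4, 0.3, 0.8, 1.1])  # neighbour positions (toy data)
c_nb = 1.0 + 0.5 * x_nb                       # linear field: exact recovery
c0, dcdx = mls_linear_fit(x_nb, c_nb, x0=0.0, h=1.0)
print(c0, dcdx)                               # ~1.0 and ~0.5
```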

  18. Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

    SciTech Connect

    CHAMBERS,WILLIAM B.; HAALAND,DAVID M.; KEENAN,MICHAEL R.; MELGAARD,DAVID K.

    1999-10-01

    The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis. When one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded during calibration). The MWCLS method is found to be vastly
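
    A sketch of the two MWCLS ingredients named above: a classical least-squares fit of pure-element spectra within one window, and a statistically efficient pooling of per-window estimates (inverse-variance weighting is one such choice; shapes and names are hypothetical).

```python
import numpy as np

def cls_window(K, y):
    """Classical LS for one window: y = K c + e, K holds pure-element spectra."""
    c, res, rank, _ = np.linalg.lstsq(K, y, rcond=None)
    dof = max(K.shape[0] - K.shape[1], 1)
    sigma2 = res[0] / dof if res.size else 0.0
    var = sigma2 * np.diag(np.linalg.inv(K.T @ K))  # Var(c), element-wise
    return c, var

def pool(cs, vs):
    """Inverse-variance weighted mean across windows, element by element."""
    w = 1.0 / np.maximum(np.asarray(vs), 1e-12)
    return (w * np.asarray(cs)).sum(axis=0) / w.sum(axis=0)

# windows = [(K_1, y_1), ..., (K_46, y_46)]   # one (K, y) pair per window
# per_win = [cls_window(K, y) for K, y in windows]
# c_pooled = pool([c for c, _ in per_win], [v for _, v in per_win])
```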

  19. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    PubMed

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model which can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured using near-infrared (NIR) absorption spectroscopy. Analysis of the NIR absorption peaks of all samples showed that lavender essential oil carries abundant chemical information and that the interference of random noise is relatively low in the spectral interval of 7100~4500 cm(-1); thus, the PLS models were constructed using this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the 157 remaining lavender essential oil samples were divided into 105 calibration samples and 52 validation samples. Gas chromatography-mass spectrometry (GC-MS) was used as the reference method to determine the content of linalool and linalyl acetate in lavender essential oil. A matrix was then established with the GC-MS data of the two compounds in combination with the original NIR data. In order to optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and to compare their filtering effects; after analyzing the quantitative model results for linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was the optimum pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points that are unrelated to the determined components or show nonlinear correlation; finally, 8 spectral intervals comprising 160 wavelength points in total were obtained as the dataset. The datasets optimized by OSC-FiPLS were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil; numbers of hidden variables of two
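
    The calibration/validation split and RMSEP computation described above can be sketched as follows (placeholder arrays; the OSC and FiPLS preprocessing steps are omitted, and the component count is a guess):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_cal, y_cal = rng.random((105, 160)), rng.random(105)  # 160 wavelength points
X_val, y_val = rng.random((52, 160)), rng.random(52)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((pred - y_val) ** 2))  # root mean square error of
print(f"RMSEP = {rmsep:.3f}")                  # prediction on the validation set
```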

  20. Correspondence and Least Squares Analyses of Soil and Rock Compositions for the Viking Lander 1 and Pathfinder Sites

    NASA Technical Reports Server (NTRS)

    Larsen, K. W.; Arvidson, R. E.; Jolliff, B. L.; Clark, B. C.

    2000-01-01

    Correspondence and Least Squares Mixing Analysis techniques are applied to the chemical composition of Viking 1 soils and Pathfinder rocks and soils. Implications for the parent composition of local and global materials are discussed.

  1. Application of Partial Least Squares (PLS) Regression to Determine Landscape-Scale Aquatic Resource Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology to study the associations among constituents of surface water and landscapes. Common data problems in ecological studies include: s...

  2. Application of Partial Least Square (PLS) Regression to Determine Landscape-Scale Aquatic Resources Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology, particularly for determining the associations among multiple constituents of surface water and landscape configuration. Common dat...

  3. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
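
    Schematically (viscosity and scaling omitted), the first-order velocity-pressure-vorticity system for the Stokes problem and the least-squares functional minimized over the finite element spaces read:

```latex
\omega - \nabla\times\mathbf{u} = 0, \qquad
\nabla\times\omega + \nabla p = \mathbf{f}, \qquad
\nabla\cdot\mathbf{u} = 0;
```

```latex
J(\mathbf{u},p,\omega) \;=\;
\|\omega - \nabla\times\mathbf{u}\|_{0}^{2}
+ \|\nabla\times\omega + \nabla p - \mathbf{f}\|_{0}^{2}
+ \|\nabla\cdot\mathbf{u}\|_{0}^{2}.
```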

  4. Three-Dimensional Simulations of Marangoni-Benard Convection in Small Containers by the Least-Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao; Jiang, Bo-Nan; Wu, Jie; Duh, J. C.

    1996-01-01

    This paper reports a numerical study of the Marangoni-Benard (MB) convection in a planar fluid layer. The least-squares finite element method (LSFEM) is employed to solve the three-dimensional Stokes equations and the energy equation. First, the governing equations are reduced to first order by introducing variables such as vorticity and heat fluxes. The resultant first-order system is then cast into a div-curl-grad formulation, and its ellipticity and permissible boundary conditions are readily proved. This numerical approach provides an equal-order discretization for velocity, pressure, vorticity, temperature, and heat conduction fluxes, and therefore can provide high fidelity solutions for the complex flow physics of the MB convection. Numerical results reported include the critical Marangoni numbers (Ma_c) for the onset of the convection in containers with various aspect ratios, and the planforms of supercritical MB flows. The numerical solutions compared favorably with the experimental results reported by Koschmieder et al.

  5. Parameterized least-squares attitude history estimation and magnetic field observations of the auroral spatial structures probe

    NASA Astrophysics Data System (ADS)

    Martineau, Ryan J.

    Terrestrial auroras are visible-light events caused by charged particles trapped by the Earth's magnetic field precipitating into the atmosphere along magnetic field lines near the poles. Auroral events are very dynamic, changing rapidly in time and across large spatial scales. Better knowledge of the flow of energy during an aurora will improve understanding of the heating processes in the atmosphere during geomagnetic and solar storms. The Auroral Spatial Structures Probe is a sounding rocket campaign to observe the middle-atmosphere plasma and electromagnetic environment during an auroral event with multipoint simultaneous measurements for fine temporal and spatial resolution. The auroral event in question occurred on January 28, 2015, with liftoff of the rocket at 10:41:01 UTC. The goal of this thesis is to produce clear observations of the magnetic field that may be used to model the current systems of the auroral event. To achieve this, the attitude of ASSP's 7 independent payloads must be estimated, and a new attitude determination method is attempted. The new solution uses nonlinear least-squares parameter estimation with a rigid-body dynamics simulation to determine attitude with an estimated accuracy of a few degrees. Observed magnetic field perturbations found using the new attitude solution are presented, where structures of the perturbations are consistent with previous observations and electromagnetic theory.

  6. Modifying constrained least-squares restoration for application to single photon emission computed tomography projection images

    SciTech Connect

    Penney, B.C.; King, M.A.; Schwinger, R.B.; Baker, S.P.; Doherty, P.W.

    1988-05-01

    Image restoration methods have been shown to increase the contrast of nuclear medicine images by decreasing the effects of scatter and septal penetration. Image restoration can also reduce the high-frequency noise in the image. This study applies constrained least-squares (CLS) restoration to the projection images of single photon emission computed tomography (SPECT). In a previous study, it was noted that CLS restoration has the potential advantage of automatically adapting to the blurred object. This potential is confirmed using planar images. CLS restoration is then modified to improve its performance when applied to SPECT projection image sets. The modification was necessary because the Poisson noise in low count SPECT images causes considerable variation in the CLS filter. On phantom studies, count-dependent Metz restoration was slightly better than the modified CLS restoration method, according to measures of contrast and noise. However, CLS restoration was generally judged as yielding the best results when applied to clinical studies, apparently because of its ability to adapt to the image being restored.
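
    For context, the standard constrained least-squares restoration filter takes the following frequency-domain form (textbook statement; the modification in this record adapts the constraint to low-count SPECT projections):

```latex
\hat{F}(u,v) \;=\; \frac{H^{*}(u,v)}{\,|H(u,v)|^{2} + \gamma\,|P(u,v)|^{2}\,}\; G(u,v),
```

    where G is the degraded image spectrum, H the system blur, P a smoothness constraint (typically the Laplacian), and γ is chosen to satisfy a noise-power constraint, which is what lets the filter adapt automatically to the blurred object.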

  7. Realizations and performances of least-squares estimation and Kalman filtering by systolic arrays

    SciTech Connect

    Chen, M.J.

    1987-01-01

    Fast least-squares (LS) estimation and Kalman-filtering algorithms utilizing systolic-array implementation are studied. Based on a generalized systolic QR algorithm, a modified LS method is proposed and shown to have superior computational and inter-cell connection complexities, making it more practical for systolic-array implementation. After whitening processing, the Kalman filter can be formulated as an SRIF data-processing problem followed by a simple LS operation. This approach simplifies the computational structure, and is more reliable when the system has a singular or near-singular coefficient matrix. To improve the throughput rate of the systolic Kalman filter, a topology for stripe QR processing is also proposed. By skewing the order of the input matrices, a fully pipelined systolic Kalman-filtering operation can be achieved. With O(n^2) processing units, the system throughput rate becomes O(n). The numerical properties of the systolic LS estimation and Kalman filtering algorithms under finite word-length effects are studied via analysis and computer simulations, and are compared with those of conventional approaches. Fault tolerance of the LS estimation algorithm is also discussed. It is shown that by using a simple bypass register, reasonable estimation performance is still possible with a transiently defective processing unit.

  8. Least-squares reverse-time migration with cost-effective computation and memory storage

    NASA Astrophysics Data System (ADS)

    Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

    2016-06-01

    Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repeatedly computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid point are stored. The RTM procedure of each iteration then only involves: (1) backward propagation of the residual between Born-modeled and acquired data, and (2) implementation of the modified excitation-amplitude imaging condition by multiplying the maximum amplitude by the back-propagated data residuals, only at the grid points that satisfy the imaging time at each time step. For a complex model, 2 or 3 local peak amplitudes and corresponding traveltimes should be confirmed and stored for all grid points so that multiarrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method greatly reduces computation and memory costs.

  9. Texture discrimination of green tea categories based on least squares support vector machine (LSSVM) classifier

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; He, Yong; Qiu, Zhengjun; Wu, Di

    2008-03-01

    This research aimed to develop a multi-spectral imaging technique for green tea category discrimination based on texture analysis. Three key wavelengths of 550, 650 and 800 nm were implemented in a common-aperture multi-spectral charge-coupled-device camera, and 190 unique images were acquired for a data set covering four different kinds of green tea. An image data set consisting of 15 texture features for each image was generated based on texture analysis techniques including the grey-level co-occurrence method (GLCM) and texture filtering. To optimize the texture features, 5 features that were not correlated with the tea category were eliminated. Unsupervised cluster analysis was conducted using the optimized texture features based on principal component analysis. The cluster analysis showed that the four kinds of green tea could be separated in the first two principal components space; however, there was overlap among the different kinds of green tea. To enhance the discrimination performance, a least squares support vector machine (LSSVM) classifier was developed based on the optimized texture features. Excellent discrimination performance was obtained for samples in the prediction set, with 100%, 100%, 75% and 100% accuracy for the four kinds of green tea, respectively. It can be concluded that texture discrimination of green tea categories based on multi-spectral image technology is feasible.
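
    A sketch of the texture-feature step, using scikit-image's GLCM utilities (spelled greycomatrix/greycoprops in older releases); a standard RBF-kernel SVM stands in here for the paper's least squares SVM, whose squared-loss variant is not in scikit-learn.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img):
    """img: 2-D uint8 array; returns a few standard GLCM statistics."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # toy image
print(glcm_features(img))                              # 8 texture features

# X = np.array([glcm_features(im) for im in images]); y = labels
# clf = SVC(kernel="rbf").fit(X, y)                    # LSSVM stand-in
```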

  10. Aerosol and trace gas profile retrievals from MAX-DOAS observations using simple least squares methods

    NASA Astrophysics Data System (ADS)

    Wagner, Thomas; Beirle, Steffen; Shaiganfar, Reza

    2010-05-01

    Multi-AXis (MAX-) DOAS observations have become a widely used technique for the retrieval of atmospheric profiles of trace gases and aerosols. Since the information content of MAX-DOAS observations is limited, optimal estimation techniques are usually used for profile inversion, and a-priori assumptions are needed. In contrast, in our retrieval we limit the retrieved parameters to a few basic profile parameters (e.g. profile shape and integrated column density), which are retrieved without further a-priori assumptions. The retrieval is instead based on simple least squares methods. Despite the simple retrieval scheme, our method has the advantage that it is very robust and stable. It also yields the most important parameters with good accuracy (e.g. total aerosol optical depth, total tropospheric trace gas column density, surface aerosol extinction, surface trace gas mixing ratio). Some of these parameters can even be retrieved under cloudy conditions. We present MAX-DOAS results from two measurement campaigns: the CINDI campaign in Cabauw, The Netherlands, in 2009 and the FORMAT campaign in Milano, Italy, in 2003. Results for aerosols, NO2, and HCHO are presented and compared to independent measurements.

  11. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  12. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    NASA Astrophysics Data System (ADS)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29, 5.57, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.
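
    The recursive least-squares recursion at the heart of such a predictor can be sketched as follows (standard RLS update equations; band selection, preliminary estimates and the arithmetic coder are omitted):

```python
import numpy as np

class RLSPredictor:
    """Predict a sample from an input vector, updating weights recursively."""
    def __init__(self, n, lam=0.999, delta=1e-2):
        self.w = np.zeros(n)         # prediction weights
        self.P = np.eye(n) / delta   # inverse input-correlation matrix
        self.lam = lam               # forgetting factor

    def step(self, x, d):
        """Predict d from x, then update; returns the residual to be coded."""
        e = d - self.w @ x
        k = self.P @ x / (self.lam + x @ self.P @ x)   # gain vector
        self.w += k * e
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        return e

# rls = RLSPredictor(n=4)    # e.g. predict from 4 co-located previous bands
# residual = rls.step(x, d)  # per pixel; residuals go to the entropy coder
```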

  13. Weighted partial least squares method to improve calibration precision for spectroscopic noise-limited data

    SciTech Connect

    Haaland, D.M.; Jones, H.D.T.

    1997-09-01

    Multivariate calibration methods have been applied extensively to the quantitative analysis of Fourier transform infrared (FT-IR) spectral data. Partial least squares (PLS) methods have become the most widely used multivariate methods for quantitative spectroscopic analyses. Most often these methods are limited by model error or by the accuracy or precision of the reference methods. However, in some cases, the precision of the quantitative analysis is limited by the noise in the spectroscopic signal. In these situations, the precision of the PLS calibrations and predictions can be improved by incorporating weighting in the PLS algorithm. If the spectral noise of the system is known (e.g., in detector-noise-limited cases), then appropriate weighting can be incorporated into the multivariate spectral calibrations and predictions. A weighted PLS (WPLS) algorithm was developed to improve the precision of the analysis in the case of spectral-noise-limited data. This new PLS algorithm was then tested with real and simulated data, and the results compared with the unweighted PLS algorithm. Using near-infrared (NIR) spectral data, clear gains in calibration precision were observed when the WPLS algorithm was applied. The best WPLS method improved prediction precision for the analysis of one of the minor components by a factor of nearly 9 relative to the unweighted PLS algorithm.
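
    The weighting idea can be stated compactly in generalized least-squares form (schematic: WPLS embeds such noise weights inside the PLS decomposition itself rather than in a direct regression solve):

```latex
\hat{\beta} \;=\; \left( X^{\top} \Sigma^{-1} X \right)^{-1} X^{\top} \Sigma^{-1} y ,
```

    where Σ is the known spectral noise covariance, so that measurements with lower noise receive proportionally larger weight.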

  14. Time-dependent speciation and extinction from phylogenies: a least squares approach.

    PubMed

    Paradis, Emmanuel

    2011-03-01

    Molecular phylogenies contribute to the study of the patterns and processes of macroevolution even though past events (fossils) are not recorded in these data. In this article, I consider the general time-dependent birth-death model to fit any model of temporal variation in speciation and extinction to phylogenies. I establish formulae to compute the expected cumulative distribution function of branching times for any model, and, building on previously published works, I derive maximum likelihood estimators. Some limitations of the likelihood approach are described, and a fitting procedure based on least squares is developed that alleviates the shortcomings of maximum likelihood in the present context. Parametric and nonparametric bootstrap procedures are developed to assess uncertainty in the parameter estimates, the latter version giving narrower confidence intervals and being faster to compute. I also present several general algorithms of tree simulation in continuous time. I illustrate the application of this approach with the analysis of simulated datasets, and two published phylogenies of primates (Catarrhinae) and lizards (Agamidae). PMID:21054360

  15. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

    PubMed

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-08-01

    Headspace-Mass Spectrometry (HS-MS), Fourier Transform Mid-Infrared spectroscopy (FT-MIR) and UV-Visible spectrophotometry (UV-vis) instrumental responses have been combined to predict virgin olive oil sensory descriptors. A total of 343 olive oil samples analyzed during four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data with both low- and mid-level abstraction improved PLS predictions for all the olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, especially considering that the legal categorization of virgin olive oils only requires the determination of fruity and defective descriptors. PMID:27216664

  16. Pole coordinates data prediction by combination of least squares extrapolation and double autoregressive prediction

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw

    2016-04-01

    Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper is focused on pole coordinates data prediction by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs; thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time-frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by adding an AR prediction correction computed from the time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
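
    The LS extrapolation part can be sketched as an ordinary least-squares fit of a low-order polynomial plus Chandler (~433 d) and annual sinusoids, evaluated beyond the data span (toy series; the AR correction applied to the residuals is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(3000.0)                            # days (placeholder time axis)
x = (0.1 + 1e-5 * t
     + 0.15 * np.sin(2 * np.pi * t / 433.0)      # Chandler-like term
     + 0.08 * np.cos(2 * np.pi * t / 365.25)     # annual term
     + 0.005 * rng.standard_normal(t.size))      # noise

def design(t):
    cols = [np.ones_like(t), t]                  # bias and linear trend
    for period in (433.0, 365.25):
        w = 2 * np.pi / period
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design(t), x, rcond=None)
t_future = np.arange(3000.0, 3056.0)             # 8 weeks ahead
x_pred = design(t_future) @ beta                 # LS extrapolation
```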

  17. Automatic retinal vessel classification using a Least Square-Support Vector Machine in VAMPIRE.

    PubMed

    Relan, D; MacGillivray, T; Ballerini, L; Trucco, E

    2014-01-01

    It is important to classify retinal blood vessels into arterioles and venules for computerised analysis of the vasculature and to aid discovery of disease biomarkers. For instance, zone B is the standardised region of a retinal image utilised for the measurement of the arteriole to venule width ratio (AVR), a parameter indicative of microvascular health and systemic disease. We introduce a Least Square-Support Vector Machine (LS-SVM) classifier, for the first time to the best of our knowledge, to automatically label arterioles and venules. We use only 4 image features and consider vessels inside zone B (802 vessels from 70 fundus camera images) and in an extended zone (1,207 vessels, 70 fundus camera images). We achieve an accuracy of 94.88% and 93.96% in zone B and the extended zone, respectively, with a training set of 10 images and a testing set of 60 images. With a smaller training set of only 5 images and the same testing set we achieve an accuracy of 94.16% and 93.95%, respectively. This experiment was repeated five times by randomly choosing 10 and 5 images for the training set. Mean classification accuracies are close to the above-mentioned results. We conclude that the performance of our system is very promising and outperforms most recently reported systems. Our approach requires smaller training data sets compared to others but still results in a similar or higher classification rate. PMID:25569917

  18. Simultaneous backscatter and attenuation estimation using a least squares method with constraints.

    PubMed

    Nam, Kibo; Zagzebski, James A; Hall, Timothy J

    2011-12-01

    Backscatter and attenuation variations are essential contrast mechanisms in ultrasound B-mode imaging. Emerging quantitative ultrasound methods extract and display absolute values of these tissue properties. However, in clinical applications, backscatter and attenuation parameters sometimes are not easily measured because of tissue inhomogeneities above the region of interest (ROI). We describe a least squares method (LSM) that fits the echo signal power spectra from an ROI to a three-parameter tissue model that simultaneously yields estimates of attenuation losses and backscatter coefficients. To test the method, tissue-mimicking phantoms with backscatter and attenuation contrast as well as uniform phantoms were scanned with linear array transducers on a Siemens S2000. Attenuation and backscatter coefficients estimated by the LSM were compared with those derived using a reference phantom method (Yao et al. 1990). Results show that the LSM yields effective attenuation coefficients for uniform phantoms comparable to values derived using the reference phantom method. For layered phantoms exhibiting nonuniform backscatter, the LSM resulted in smaller attenuation estimation errors than the reference phantom method. Backscatter coefficients derived using the LSM were in excellent agreement with values obtained from laboratory measurements on test samples and with theory. The LSM is more immune to depth-dependent backscatter changes than commonly used reference phantom methods. PMID:21963038

  19. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling. PMID:22164092
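
    A hedged sketch of weighted circular multilateration: linearize the range equations against a reference anchor and solve the resulting weighted least-squares system, weighting each equation by the confidence of its range estimate (names and weights are illustrative).

```python
import numpy as np

def wls_position(anchors, d, w):
    """anchors: (n, 2) positions; d: (n,) range estimates; w: (n,) weights."""
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + anchors[1:, 0] ** 2 - anchors[0, 0] ** 2
         + anchors[1:, 1] ** 2 - anchors[0, 1] ** 2)
    sw = np.sqrt(w[1:])                      # row scaling by sqrt of weights
    sol, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return sol

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 4.0])
rng = np.random.default_rng(0)
d = np.linalg.norm(anchors - true, axis=1) + 0.1 * rng.standard_normal(4)
print(wls_position(anchors, d, w=np.ones(4)))  # approximately [3, 4]
```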

  20. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.

    PubMed

    Liu, Dawei; Lin, Xihong; Ghosh, Debashis

    2007-12-01

    We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations. PMID:18078480

  1. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  2. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piece-wise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations when all other algorithms fail.

  3. Entropy and generalized least square methods in assessment of the regional value of streamgages

    USGS Publications Warehouse

    Markus, M.; Vernon, Knapp H.; Tasker, Gary D.

    2003-01-01

    The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least square regression (GLS) method, developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.

  4. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. This approach is based on the recursive least squares (RLS) algorithm focusing on a deterministic 'input-output' model. RLS has a Kalman filtering feature and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of variables. The identification of suspension parameters is achieved by machine learning of the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements' considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with or close to the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for future vehicle condition monitoring systems.

  5. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    PubMed

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-12-15

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because the physical and chemical properties of a measured object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate the active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to conventional PLS using wavelengths selected on the basis of variable importance in the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843

  6. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    PubMed

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or nonbifurcation can be an important step in improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean lumen area difference of 0.11 mm(2) compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features. PMID:26433615

  7. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. By taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in a linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to manual segmentation.

  8. Prediction of protein-protein interactions based on protein-protein correlation using least squares regression.

    PubMed

    Huang, De-Shuang; Zhang, Lei; Han, Kyungsook; Deng, Suping; Yang, Kai; Zhang, Hongbo

    2014-01-01

    In order to transform protein sequences into feature vectors, several methods have been developed, such as computing auto covariance (AC), conjoint triad (CT), local descriptor (LD), Moran autocorrelation (MA), normalized Moreau-Broto autocorrelation (NMB) and so on. In this paper, we adopt these transformation methods to encode the proteins, where AC, CT, LD, MA and NMB are all represented by '+' in a unified manner. A new method, i.e. the combination of least squares regression with '+' (abbreviated as LSR(+)), is introduced for encoding a protein-protein correlation-based feature representation and an interacting protein pair. There are thus five different combinations for LSR(+), i.e. LSRAC, LSRCT, LSRLD, LSRMA and LSRNMB. We combined a support vector machine (SVM) approach with LSR(+) to predict protein-protein interactions (PPI) and PPI networks. The proposed method has been applied to four datasets, i.e. Saccharomyces cerevisiae, Escherichia coli, Homo sapiens and Caenorhabditis elegans. The experimental results demonstrate that all LSR(+) methods outperform many existing representative algorithms. Therefore, LSR(+) is a powerful tool to characterize protein-protein correlations and to infer PPI, whilst keeping high performance on prediction of PPI networks. PMID:25059329

  9. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to meeting the field uniformity requirement, because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach, however, is generally slow and also has difficulty balancing the field quality and the total amount of steel used for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and total shim mass. The least squares method is used to minimize the magnetic field inhomogeneity over the imaging surface, with the total mass of steel controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach.
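
    The optimization idea can be sketched with scikit-learn's Lasso, an L1-penalized least-squares solver (the response matrix, penalty weight and non-negativity constraint below are illustrative, not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 120))   # field response of 120 shim slots at
b = rng.normal(size=500)          # 500 target points; field to be produced

# min_m ||A m - b||^2 + alpha * ||m||_1, with non-negative shim masses.
lasso = Lasso(alpha=0.05, positive=True, fit_intercept=False, max_iter=50000)
m = lasso.fit(A, b).coef_
print((m > 0).sum(), "active shim slots; total mass ~", m.sum())
```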

  10. Degree of deacetylation of chitosan by infrared spectroscopy and partial least squares.

    PubMed

    Dimzon, Ian Ken D; Knepper, Thomas P

    2015-01-01

    The determination of the degree of deacetylation of highly deacetylated chitosan by infrared (IR) spectroscopy was significantly improved with the use of partial least squares (PLS). The IR spectral region from 1500 to 1800 cm(-1) was taken as the dataset. Different PLS models resulting from various data pre-treatments were evaluated and compared. The PLS model that gave excellent internal and external validation performance came from data that were baseline-corrected and normalized relative to the maximum corrected absorbance. Analysis of the PLS loadings plot showed that the important variables in the spectral region came from the absorption maxima related to the amide bands at 1660 and 1550 cm(-1) and the amine band at 1600 cm(-1). IR-PLS results were comparable to those obtained by potentiometric titration, and were found to be more precise and rugged than the usual IR absorbance-ratio method. This is consistent with the fact that IR spectral resolution is limited and that the absorption at a single wavelength is influenced by other factors such as hydrogen bonding and the presence of water. PMID:25316417
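
    For orientation, a calibration of this general shape (baseline correction, normalization to the maximum absorbance, then PLS regression) can be sketched with scikit-learn as follows; the spectra and reference values are synthetic placeholders, not the authors' data.

```python
# Illustrative PLS calibration pipeline on synthetic placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((40, 300))              # 40 spectra over 1500-1800 cm^-1 (synthetic)
y = rng.uniform(75, 95, 40)            # degree of deacetylation, % (synthetic)

X = X - X.min(axis=1, keepdims=True)   # crude baseline correction
X = X / X.max(axis=1, keepdims=True)   # normalize to maximum absorbance

pls = PLSRegression(n_components=3)
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))  # meaningless for random data
```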

  11. Fishery landing forecasting using EMD-based least square support vector machine models

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically to address the difficulty of modeling fishery landings, whose time series are highly nonlinear, non-stationary and seasonal, and can hardly be modelled properly and forecasted accurately by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

  12. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve; the curve customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments, and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
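
    A toy version of the basic LS extrapolation model is sketched below: a linear polynomial plus annual and Chandler sinusoids, with the last observed point fixed by shifting the extrapolated curve through it. The periods, data, and the offset form of the fix are assumptions for illustration.

```python
# Toy LS extrapolation: polynomial + sinusoids, then fix the last data point.
import numpy as np

def ls_design(t, periods=(365.24, 433.0)):           # days (assumed periods)
    cols = [np.ones_like(t), t]                      # linear polynomial part
    for P in periods:
        cols += [np.cos(2*np.pi*t/P), np.sin(2*np.pi*t/P)]
    return np.column_stack(cols)

t = np.arange(0.0, 3000.0)
rng = np.random.default_rng(2)
y = 0.1 + 1e-4*t + 0.2*np.sin(2*np.pi*t/433.0) \
    + 0.1*np.cos(2*np.pi*t/365.24) + 0.01*rng.standard_normal(t.size)

coef, *_ = np.linalg.lstsq(ls_design(t), y, rcond=None)
t_fut = np.arange(3000.0, 3090.0)                    # 90-day prediction
pred = ls_design(t_fut) @ coef
pred += y[-1] - ls_design(t[-1:]) @ coef             # fix the last observed point
```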

  13. IUPAC-consistent approach to the limit of detection in partial least-squares calibration.

    PubMed

    Allegrini, Franco; Olivieri, Alejandro C

    2014-08-01

    There is currently no well-defined procedure for providing the limit of detection (LOD) in multivariate calibration. Defining an estimator for the LOD in this scenario has proven to be more complex than intuitively extending the traditional univariate definition. For these reasons, although many attempts have been made to arrive at a reasonable convention, additional effort is required to achieve full agreement between the univariate and multivariate LOD definitions. In this work, a novel approach is presented to estimate the LOD in partial least-squares (PLS) calibration. Instead of a single LOD value, an interval of LODs is provided, which depends on the variation of the background composition in the calibration space. This is in contrast with previously proposed univariate extensions of the LOD concept. With the present definition, the LOD interval becomes a parameter characterizing the overall PLS calibration model, and not each test sample in particular, as has been proposed in the past. The new approach takes into account official IUPAC recommendations, and also the latest developments in error-in-variables theory for PLS calibration. Both simulated and real analytical systems have been studied to illustrate the properties of the new LOD concept. PMID:25008998

  14. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.

  15. [UV spectroscopy coupled with partial least squares to determine the enantiomeric composition in chiral drugs].

    PubMed

    Li, Qian-qian; Wu, Li-jun; Liu, Wei; Cao, Jin-li; Duan, Jia; Huang, Yue; Min, Shun-geng

    2012-02-01

    In the present study, sucrose was used as a chiral selector to determine the molar fractions of R-metalaxyl and S-ibuprofen, based on the UV spectral difference caused by the interaction of the R- and S-isomers with sucrose. The quantitative model for the molar fraction of R-metalaxyl was established by partial least squares (PLS) regression and the robustness of the model was evaluated with 6 independent validation samples. The coefficient of determination (R2) and the standard error of calibration (SEC) were 99.98% and 0.003, respectively. The correlation coefficient between estimated and specified values, the standard error, and the relative standard deviation (RSD) for the independent validation samples were 0.9998, 0.0004 and 0.054%, respectively. The quantitative model for the molar fraction of S-ibuprofen was also established by PLS and its robustness evaluated. The coefficient of determination (R2) and the standard error of calibration (SEC) were 99.82% and 0.007, respectively. The correlation coefficient between estimated and specified values for the independent validation samples was 0.9981, the standard error of prediction (SEP) was 0.002 and the relative standard deviation (RSD) was 0.2%. The results demonstrate that sucrose is an ideal chiral selector for building a stable regression model to determine the enantiomeric composition. PMID:22512198

  16. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector be zero, is included to make the first-order system elliptic. Simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
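
    The matrix-free solver strategy can be illustrated in isolation. The sketch below is a generic Jacobi-preconditioned conjugate gradient routine in the spirit described (not the paper's code): the operator is passed as a callable, so no element or global matrix is ever assembled.

```python
# Generic matrix-free Jacobi-preconditioned conjugate gradient solver.
import numpy as np

def pcg(apply_A, b, diag_A, tol=1e-8, maxiter=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                        # Jacobi preconditioning
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SPD system standing in for the LSFEM normal equations.
rng = np.random.default_rng(3)
M = rng.standard_normal((100, 100))
A = M.T @ M + 100.0 * np.eye(100)         # symmetric positive definite
b = rng.standard_normal(100)
x = pcg(lambda v: A @ v, b, np.diag(A).copy())
print(np.linalg.norm(A @ x - b))          # small residual confirms the solve
```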

  17. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulty scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
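
    The computational heart of the speedup is that circulant systems diagonalize under the FFT, so solves cost O(n log n). A one-level sketch with an assumed kernel column (the paper uses multi-level circulant approximations) is:

```python
# Solving a circulant system C x = b in O(n log n) via the FFT.
import numpy as np
from scipy.linalg import circulant

def circulant_solve(c, b):
    """Solve C x = b, where C is circulant with first column c."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 512
k = np.minimum(np.arange(n), n - np.arange(n))   # circular distance
c = np.exp(-(k / 25.0) ** 2)                     # symmetric kernel-like column (assumed)
c[0] += 1.0                                      # ridge term keeps C well posed
b = np.random.default_rng(4).standard_normal(n)
x = circulant_solve(c, b)
print(np.linalg.norm(circulant(c) @ x - b))      # tiny residual verifies the solve
```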

  18. Eddy current characterization of small cracks using least square support vector machine

    NASA Astrophysics Data System (ADS)

    Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

    2016-04-01

    Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Despite being a conventional technique for defect detection and localization, its main weakness is defect characterization, i.e. the exact determination of shape and dimensions, which remains an open question. In this work, we demonstrate the capability of small crack sizing using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size of rectangular, thin defects (length and depth) in a conductive plate. The approach is a novel combination of the finite element method (FEM) with a statistical learning method, least squares support vector machines (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up by the EC sensor. Several methods can be used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, and the accuracy was experimentally verified. These results demonstrate the applicability of the presented approach.

  19. Multidimensional model of apathy in older adults using partial least squares-path modeling.

    PubMed

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed. PMID:27153818

  20. SIMULTANEOUS BACKSCATTER AND ATTENUATION ESTIMATION USING A LEAST SQUARES METHOD WITH CONSTRAINTS

    PubMed Central

    Nam, Kibo; Zagzebski, James A.; Hall, Timothy J.

    2011-01-01

    Backscatter and attenuation variations are essential contrast mechanisms in ultrasound B-mode imaging. Emerging quantitative ultrasound methods extract and display absolute values of these tissue properties. However, in clinical applications, backscatter and attenuation parameters sometimes are not easily measured because of tissue inhomogeneities above the region of interest. We describe a least squares method (LSM) that fits the echo signal power spectra from a region of interest (ROI) to a 3-parameter tissue model that simultaneously yields estimates of attenuation losses and backscatter coefficients. To test the method, tissue-mimicking phantoms with backscatter and attenuation contrast, as well as uniform phantoms, were scanned with linear array transducers on a Siemens S2000. Attenuation and backscatter coefficients estimated by the LSM were compared with those derived using a reference phantom method (Yao et al. 1990). Results show that the LSM yields effective attenuation coefficients for uniform phantoms comparable to values derived using the reference phantom method. For layered phantoms exhibiting non-uniform backscatter, the LSM resulted in smaller attenuation estimation errors than the reference phantom method. Backscatter coefficients derived using the LSM were in excellent agreement with values obtained from laboratory measurements on test samples and with theory. The LSM is more immune to depth-dependent backscatter changes than commonly used reference phantom methods. PMID:21963038

  1. [Quantitative analysis of alloy steel based on laser induced breakdown spectroscopy with partial least squares method].

    PubMed

    Cong, Zhi-Bo; Sun, Lan-Xiang; Xin, Yong; Li, Yang; Qi, Li-Feng; Yang, Zhi-Jia

    2014-02-01

    In the present paper, both the partial least squares (PLS) method and the calibration curve (CC) method are used to quantitatively analyze laser-induced breakdown spectroscopy data obtained from standard alloy steel samples. Both major and trace elements were quantitatively analyzed. Comparing the results of the two calibration methods yields some useful conclusions: for major elements, the PLS method is better than the CC method in quantitative analysis; more importantly, for trace elements, the CC method cannot give quantitative results due to the extremely weak characteristic spectral lines, whereas the PLS method retains good quantitative ability. The regression coefficients of the PLS method are also compared with the original spectral data, including background interference, to explain the advantage of the PLS method in LIBS quantitative analysis. The results prove that the PLS method applied to laser-induced breakdown spectroscopy is suitable for quantitative analysis of trace elements such as C in the metallurgical industry. PMID:24822436

  2. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    PubMed

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure requires knowledge of neither the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. PMID:26213145
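
    To make the setup concrete, here is a hedged sketch of such a fit with SciPy. The double-exponential parameterization, the known ambient temperature, and all numbers are illustrative assumptions, not the paper's exact formulas.

```python
# Hedged sketch: nonlinear LS fit of a double-exponential cooling curve.
import numpy as np
from scipy.optimize import least_squares

T_amb = 20.0                                   # ambient temperature, assumed known

def model(t, T0, k1, k2, t0):
    """Double-exponential cooling since the (unknown) time of death t0."""
    s = t - t0
    return T_amb + (T0 - T_amb) * (k2*np.exp(-k1*s) - k1*np.exp(-k2*s)) / (k2 - k1)

t_obs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # hours after the first reading
T_obs = model(t_obs, 37.2, 0.8, 0.3, -3.0)     # synthetic data, TOD 3 h earlier

res = least_squares(lambda p: model(t_obs, *p) - T_obs,
                    x0=[37.0, 1.0, 0.2, -1.0]) # initial guess for (T0, k1, k2, t0)
print("estimated TOD (hours before first reading):", -res.x[3])
```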

  3. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    USGS Publications Warehouse

    Heidari, M.; Moench, A.

    1997-01-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.

  4. Radioisotopic neutron transmission spectrometry: Quantitative analysis by using partial least-squares method.

    PubMed

    Kim, Jong-Yun; Choi, Yong Suk; Park, Yong Joon; Jung, Sung-Hee

    2009-01-01

    Neutron spectrometry, based on the scattering of high-energy fast neutrons from a radioisotope and their slowing-down by light hydrogen atoms, is a useful technique for non-destructive, quantitative measurement of hydrogen content because it has a large measuring volume and is not affected by temperature, pressure, pH value or color. The most common choices for a radioisotope neutron source are 252Cf and 241Am-Be. In this study, 252Cf with a neutron flux of 6.3x10^6 n/s has been used as an attractive neutron source because of its high neutron flux and weak radioactivity. Pulse-height neutron spectra have been obtained using an in-house built radioisotopic neutron spectrometric system equipped with a 3He detector and a multi-channel analyzer, including a neutron shield. As a preliminary study, a polyethylene block (density of approximately 0.947 g/cc and area of 40 cm x 25 cm) was used for the determination of hydrogen content using multivariate calibration models, depending on the thickness of the block. Compared with the results obtained from a simple linear calibration model, the partial least-squares regression (PLSR) method offered better performance in quantitative data analysis. It also revealed that the PLSR method in a neutron spectrometric system can be promising for real-time, online monitoring of powder processes to determine the content of any type of molecule containing hydrogen nuclei. PMID:19285419

  5. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
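
    The quantity a "metafit" compares across strategies can be illustrated as follows (a minimal sketch with an assumed linear time-offset model and noise weights): the standard uncertainty of an extrapolated value follows from the weighted least-squares parameter covariance (X^T W X)^(-1).

```python
# Standard uncertainty of a weighted LS extrapolation (assumed toy model).
import numpy as np

t = np.linspace(0.0, 10.0, 20)                 # calibration epochs (days)
X = np.column_stack([np.ones_like(t), t])      # time offset + rate model
sigma = 1.0 + 0.1 * t                          # assumed per-point noise (ns)
W = np.diag(1.0 / sigma**2)

cov = np.linalg.inv(X.T @ W @ X)               # weighted LS parameter covariance
x_pred = np.array([1.0, 15.0])                 # extrapolate 5 days past the data
u = np.sqrt(x_pred @ cov @ x_pred)             # standard uncertainty there
print(f"extrapolation standard uncertainty: {u:.3f} ns")
```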

  6. A Least Square Method Based Model for Identifying Protein Complexes in Protein-Protein Interaction Network

    PubMed Central

    Dai, Qiguo; Guo, Maozu; Guo, Yingjie; Liu, Xiaoyan; Liu, Yang; Teng, Zhixia

    2014-01-01

    Protein complexes, formed by groups of physically interacting proteins, play a crucial role in cell activities. Great effort has been made to computationally identify protein complexes from protein-protein interaction (PPI) networks. However, the accuracy of the prediction is still far from satisfactory, because the topological structures of protein complexes in the PPI network are complicated. This paper proposes a novel optimization framework, named PLSMC, to detect complexes from a PPI network. The method is based on the fact that if two proteins are in a common complex, they are likely to be interacting. PLSMC employs this relation to determine complexes by a penalized least squares method. PLSMC is applied to several public yeast PPI networks and compared with several state-of-the-art methods. The results indicate that PLSMC outperforms other methods. In particular, complexes predicted by PLSMC match known complexes with higher accuracy than other methods. Furthermore, the predicted complexes have high functional homogeneity. PMID:25405206

  7. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage. PMID:26672045

  8. Empirical mode decomposition coupled with least square support vector machine for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Ismail, Shuhaida; Shabri, Ani; Abadan, Siti Sarah

    2015-02-01

    This paper aims to investigate the ability of Empirical Mode Decomposition (EMD) coupled with the Least Squares Support Vector Machine (LSSVM) model to improve the accuracy of river flow forecasting. To assess the effectiveness of this model, Bernam monthly river flow data served as the case study. The proposed model is set out in three stages: decomposition, component identification and forecasting. In the first, decomposition stage, EMD is employed to decompose the dataset into a number of Intrinsic Mode Functions (IMF) and a residue. During the second stage, the meaningful signals are identified using a statistical measure and a new dataset is obtained. The final stage applies LSSVM as a forecasting tool to perform the river flow forecasting. The performance of the EMD coupled with LSSVM model is compared with single LSSVM models using various statistical measures: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), correlation coefficient (R) and Coefficient of Efficiency (CE). The comparison results reveal that the proposed EMD coupled with LSSVM model serves as a useful tool and a promising new method for river flow forecasting.

  9. Retrieve the evaporation duct height by least-squares support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    Douvenot, Remi; Fabbro, Vincent; Bourlier, Christophe; Saillard, Joseph; Fuchs, Hans-Hellmuth; Essen, Helmut; Foerster, Joerg

    2009-01-01

    The detection and tracking of naval targets, including low Radar Cross Section (RCS) objects like inflatable boats or sea-skimming missiles, requires a thorough knowledge of the propagation properties of the maritime boundary layer. Models exist which allow a prediction of the propagation factor using the parabolic equation algorithm. As a necessary input, the refractive index has to be known. This index, however, is strongly influenced by the actual atmospheric conditions, characterized mainly by temperature, humidity and air pressure. An approach is initiated to retrieve the vertical profile of the refractive index from the propagation factor measured on an onboard target. The method is based on LS-SVM (Least-Squares Support Vector Machines) theory. The inversion method is used here to determine the refractive index from data measured during the VAMPIRA campaign (Validation Measurement for Propagation in the Infrared and RAdar), conducted as a multinational approach over a transmission path across the Baltic Sea. As the propagation factor was measured on two reference reflectors mounted onboard a naval vessel at different heights, the inversion method could be tested at both heights. The paper describes the experimental campaign and validates the LS-SVM inversion method for refractivity from propagation factor on simple measured data.

  10. Gene Function Prediction from Functional Association Networks Using Kernel Partial Least Squares Regression

    PubMed Central

    Lehtinen, Sonja; Lees, Jon; Bähler, Jürg; Shawe-Taylor, John; Orengo, Christine

    2015-01-01

    With the growing availability of large-scale biological datasets, automated methods for extracting functionally meaningful information from these data are becoming increasingly important. Data relating to functional association between genes or proteins, such as co-expression, are often represented in terms of gene or protein networks. Several methods of predicting gene function from these networks have been proposed. However, evaluating the relative performance of these algorithms may not be trivial: concerns have been raised over biases in different benchmarking methods and datasets, particularly relating to the non-independence of functional association data and test data. In this paper we propose a new network-based gene function prediction algorithm, Compass, using a commute-time kernel and partial least squares regression. We compare Compass to GeneMANIA, a leading network-based prediction algorithm, using a number of different benchmarks, and find that Compass outperforms GeneMANIA on these benchmarks. We also explicitly explore problems associated with the non-independence of functional association data and test data. We find that a benchmark based on the Gene Ontology database, which, directly or indirectly, incorporates information from other databases, may considerably overestimate the performance of algorithms exploiting functional association data for prediction. PMID:26288239

  11. Multicoil Dixon chemical species separation with an iterative least-squares estimation method.

    PubMed

    Reeder, Scott B; Wen, Zhifei; Yu, Huanzhou; Pineda, Angel R; Gold, Garry E; Markl, Michael; Pelc, Norbert J

    2004-01-01

    This work describes a new approach to multipoint Dixon fat-water separation that is amenable to pulse sequences that require short echo time (TE) increments, such as steady-state free precession (SSFP) and fast spin-echo (FSE) imaging. Using an iterative linear least-squares method that decomposes water and fat images from source images acquired at short TE increments, images with a high signal-to-noise ratio (SNR) and uniform separation of water and fat are obtained. This algorithm extends to multicoil reconstruction with minimal additional complexity. Examples of single- and multicoil fat-water decompositions are shown from source images acquired at both 1.5T and 3.0T. Examples in the knee, ankle, pelvis, abdomen, and heart are shown, using FSE, SSFP, and spoiled gradient-echo (SPGR) pulse sequences. The algorithm was applied to systems with multiple chemical species, and an example of water-fat-silicone separation is shown. An analysis of the noise performance of this method is described, and methods to improve noise performance through multicoil acquisition and field map smoothing are discussed. PMID:14705043

  12. Least Squares Evaluations for Form and Profile Errors of Ellipse Using Coordinate Data

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-04-01

    To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as evaluation benchmarks, which allows a tolerance range to be evaluated accurately with the form error and profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and is well suited to separating the two errors in a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
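
    A minimal algebraic least-squares ellipse fit is sketched below for orientation (one common LS model; the paper's benchmarks based on the major axis and foci are not reproduced here).

```python
# Algebraic LS conic fit: a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.
import numpy as np

def fit_conic(x, y):
    D = np.column_stack([x*x, x*y, y*y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef

theta = np.linspace(0.0, 2*np.pi, 200)
rng = np.random.default_rng(5)
x = 5.0*np.cos(theta) + 0.01*rng.standard_normal(theta.size)  # noisy ellipse points
y = 3.0*np.sin(theta) + 0.01*rng.standard_normal(theta.size)
a, b, c, d, e = fit_conic(x, y)
print(f"b^2 - 4ac = {b*b - 4*a*c:.4f} (negative for an ellipse)")
```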

  13. A least-squares parameter estimation algorithm for switched Hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  14. An intelligent diagnosis method for rotating machinery using least squares mapping and a fuzzy neural network.

    PubMed

    Li, Ke; Chen, Peng; Wang, Shiming

    2012-01-01

    This study proposes a new condition diagnosis method for rotating machinery developed using least squares mapping (LSM) and a fuzzy neural network. Non-dimensional symptom parameters (NSPs) in the time domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using a detection index (DI) is also proposed for detecting and distinguishing faults in rotating machinery. In order to raise the diagnostic sensitivity of the symptom parameters, synthetic symptom parameters (SSPs) are obtained by LSM. Moreover, possibility theory and Dempster-Shafer theory (DST) are used to process the ambiguous relationship between symptoms and fault types. Finally, a sequential diagnosis method, using sequential inference and a fuzzy neural network realized by the partially-linearized neural network (PLNN), is also proposed, by which the conditions of rotating machinery can be identified sequentially. Practical examples of fault diagnosis for a roller bearing are shown to verify that the method is effective. PMID:22778622

  15. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on the nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time, the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, compared to other recently proposed robust beamforming techniques.

  16. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    PubMed

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-01-01

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false-lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results for the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some performance degradation in the tracking jitter compared to the CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275
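
    The truncated-SVD least-squares step named above can be isolated in a few lines (a generic sketch, not the authors' implementation): discarding small singular values regularizes the waveform fit.

```python
# Truncated-SVD least squares: keep only the k largest singular values.
import numpy as np

def tsvd_lstsq(A, b, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 40)) @ np.diag(np.logspace(0, -6, 40))  # ill-conditioned
b = rng.standard_normal(100)
x10 = tsvd_lstsq(A, b, 10)                 # stable, regularized solution
x40 = tsvd_lstsq(A, b, 40)                 # full-rank solution blows up in norm
print(np.linalg.norm(x10), np.linalg.norm(x40))
```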

  17. Partial least squares correlation of multivariate cognitive abilities and local brain structure in children and adolescents.

    PubMed

    Ziegler, G; Dahnke, R; Winkler, A D; Gaser, C

    2013-11-15

    Intelligent behavior is not a one-dimensional phenomenon. Individual differences in human cognitive abilities might therefore be described by a 'cognitive manifold' of intercorrelated tests from partially independent domains of general intelligence and executive functions. However, the relationship between these individual differences and brain morphology is not yet fully understood. Here we take a multivariate approach to analyzing covariations across individuals in two feature spaces: the low-dimensional space of cognitive ability subtests and the high-dimensional space of local gray matter volume obtained from voxel-based morphometry. By exploiting a partial least squares correlation framework in a large sample of 286 healthy children and adolescents, we identify directions of maximum covariance between the two spaces in terms of latent variable modeling. We obtain an orthogonal set of latent variables representing commonalities in the brain-behavior system, which emphasize specific neuronal networks involved in cognitive ability differences. We further explore the early lifespan maturation of the covariance between cognitive abilities and local gray matter volume. The dominant latent variable revealed positive weights across widespread gray matter regions (in the brain domain) and the strongest weights for parents' ratings of children's executive function (in the cognitive domain). The obtained latent variables for brain and cognitive abilities exhibited moderate correlations of 0.46-0.6. Moreover, the multivariate modeling revealed indications of a heterochronic formation of the association as a process of brain maturation across different age groups. PMID:23727321

  18. Partial Least Square Discriminant Analysis Discovered a Dietary Pattern Inversely Associated with Nasopharyngeal Carcinoma Risk

    PubMed Central

    Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen

    2016-01-01

    Evidence on the association between dietary components, dietary patterns and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify a dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case–control study, data from a total of 319 incident cases and 319 matched controls were analyzed. The dietary pattern was derived employing partial least square discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score on the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs, ordered by loading values. We observed that a one unit increase in the score was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60–0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC-protective diet is indicated with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimens for NPC prevention. PMID:27249558

  19. Towards a Generic Method for Building-Parcel Vector Data Adjustment by Least Squares

    NASA Astrophysics Data System (ADS)

    Méneroux, Y.; Brasebin, M.

    2015-08-01

    Being able to merge high-quality, complete building models with parcel data is of paramount importance for any application dealing with urban planning. However, since parcel boundaries often stand as the legal reference frame, any correction must be done exclusively on the building features. A major task is then to identify the spatial relationships and properties that buildings should keep through the conflation process. The purpose of this paper is to describe a method based on a least squares approach to ensure that buildings fit consistently into parcels while abiding by a set of standard constraints that should suit most urban applications. An important asset of our model is that it can be easily extended to comply with more specific constraints. In addition, the analysis of the results demonstrates that it provides significantly better output than a basic algorithm relying on an individual correction of features, especially regarding the conservation of metrics and topological relationships between buildings. In the future, we would like to include more specific constraints to retrieve the actual positions of buildings relative to parcel borders, and we plan to assess the contribution of our algorithm to the quality of urban application outputs.

  20. Multilocus Association Testing of Quantitative Traits Based on Partial Least-Squares Analysis

    PubMed Central

    Zhang, Feng; Guo, Xiong; Deng, Hong-Wen

    2011-01-01

    By combining the genetic information of multiple loci, multilocus association studies (MLAS) are expected to be more powerful than single locus association studies (SLAS) in disease gene mapping. However, some researchers have found that MLAS had similar or reduced power relative to SLAS, which was partly attributed to the increased degrees of freedom (dfs) in MLAS. Based on partial least-squares (PLS) analysis, we develop a MLAS approach that avoids large dfs. In this approach, genotypes are first decomposed into PLS components that not only capture the majority of the genetic information of multiple loci, but are also relevant to the target traits. The extracted PLS components are then regressed on the target traits to detect association under multilinear regression. A simulation study based on real data from the HapMap project was used to assess the performance of our PLS-based MLAS as well as other popular multilinear regression-based MLAS approaches under various scenarios, considering genetic effects and the linkage disequilibrium structure of candidate genetic regions. Using the PLS-based MLAS approach, we conducted a genome-wide MLAS of lean body mass and compared it with our previous genome-wide SLAS of lean body mass. Simulations and real data analyses support the improved power of our PLS-based MLAS in disease gene mapping relative to the other three MLAS approaches investigated in this study. We aim to provide an effective and powerful MLAS approach, which may help to overcome the limitations of SLAS in disease gene mapping. PMID:21304821

  1. Quantitative analysis of mixed hydrofluoric and nitric acids using Raman spectroscopy with partial least squares regression.

    PubMed

    Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk

    2010-06-15

    Mixed hydrofluoric and nitric acids are widely used as an etchant in the pickling process for stainless steels. Cost reduction and procedure optimization in the manufacturing process can be facilitated by optically detecting the concentrations of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of hydrofluoric acid (HF) and nitric acid (HNO3) in mixture samples with high accuracy. The experiments were carried out on mixed acids consisting of HF (0.5-3 wt%) and HNO3 (2-12 wt%) at room temperature. Fourier transform Raman spectroscopy was utilized to measure the concentrations of the mixed acids, because the mixture sample has several strong Raman bands caused by the vibrational modes of each acid in this spectral range. The calibration of the spectral data was performed using the partial least squares regression method, which is ideal for local-range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate the performance of our methodology. PMID:20441916

  2. Amplitude differences least squares method applied to temporal cardiac beat alignment

    NASA Astrophysics Data System (ADS)

    Correa, R. O.; Laciar, E.; Valentinuzzi, M. E.

    2007-11-01

    High-resolution averaged ECG is an important diagnostic technique in post-infarction and/or chagasic patients with a high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results show that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
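
    A minimal sketch of the LSAD criterion follows (the search range and windowing are assumptions): slide each detected beat against the template and take the lag minimizing the sum of squared amplitude differences as the fiducial offset.

```python
# LSAD-style alignment: pick the lag with the smallest squared difference.
import numpy as np

def lsad_align(beat, template, max_lag=50):
    n = len(template)
    lags = np.arange(-max_lag, max_lag + 1)
    costs = [np.sum((beat[max_lag + k : max_lag + k + n] - template) ** 2)
             for k in lags]
    return lags[int(np.argmin(costs))]      # fiducial-point offset

rng = np.random.default_rng(8)
template = np.sin(np.linspace(0, np.pi, 80)) ** 3              # toy beat shape
beat = np.concatenate([np.zeros(62), template, np.zeros(58)])  # true offset 12
beat += 0.05 * rng.standard_normal(beat.size)                  # additive noise
print(lsad_align(beat, template))           # expect about 12
```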

  3. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique.

    PubMed

    Hao, Ming; Wang, Yanli; Bryant, Stephen H

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. PMID:26851083

  4. Generalized Least-Squares CT Reconstruction with Detector Blur and Correlated Noise Models.

    PubMed

    Stayman, J Webster; Zbijewski, Wojciech; Tilley, Steven; Siewerdsen, Jeffrey

    2014-03-19

    The success and improved dose utilization of statistical reconstruction methods arise, in part, from their ability to incorporate sophisticated models of the physics of the measurement process and noise. Despite the great promise of statistical methods, typical measurement models ignore blurring effects, and nearly all current approaches presume independent measurements, disregarding noise correlations and a potential avenue for improved image quality. In some imaging systems, such as flat-panel-based cone-beam CT, such correlations and blurs can be a dominant factor limiting the maximum achievable spatial resolution and noise performance. In this work, we propose a novel regularized generalized least-squares reconstruction method that includes models for both system blur and correlated noise in the projection data. We demonstrate, in simulation studies, that this approach can break through the traditional spatial resolution limits of methods that do not model these physical effects. Moreover, in comparison to other approaches that attempt deblurring without a correlation model, superior noise-resolution trade-offs can be found with the proposed approach. PMID:25328638

  5. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    PubMed

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence involving various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and cortical measurements calculated in native space from each subject to determine how much the combination of various cortical measures explains human intelligence. Since the cortical measures are thought to be not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first derived latent variable to specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model, constituting the most significant factor accounting for FSIQ. Our results strongly suggest that the new predictor combining different morphometric properties of the complex cortical structure is well suited for predicting human intelligence. PMID:23643979

  6. Kinetic microplate bioassays for relative potency of antibiotics improved by partial least squares (PLS) regression.

    PubMed

    Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello

    2016-05-01

    Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose to use partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance for up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbance or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression, but there was significant deviation from linearity, so they could not be used for relative potency estimation. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination. PMID:26971814

  7. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This demands powerful computational tools that can handle high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports the analysis of both high- and low-resolution MS data as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomics data analysis. PMID:26965325
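
    To make the least-squares resolution step concrete, the toy sketch below models an observed isotope envelope as a nonnegative combination of theoretical isotope patterns of two candidate species. The patterns and abundances are fabricated for illustration; in practice they would come from the elemental compositions in the compound library.

        import numpy as np
        from scipy.optimize import nnls

        # Theoretical isotope distributions (one column per candidate species),
        # sampled on a common m/z grid; values here are invented.
        A = np.array([[0.60, 0.00],
                      [0.25, 0.55],
                      [0.10, 0.30],
                      [0.05, 0.15]])

        true_abundance = np.array([3.0, 1.5])
        y = A @ true_abundance + 0.01 * np.random.default_rng(1).normal(size=4)

        # Nonnegative least squares keeps the estimated amounts physical
        abundance, _ = nnls(A, y)
        print("estimated abundances:", np.round(abundance, 2))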

  8. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping their neck, hip and knee joints fixed, and then to return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test across all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also reflects the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
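
    A minimal recursive least squares (RLS) sketch of the kind of estimator involved is given below, tracking PD gains from a model tau = KP*theta + KD*dtheta. The study uses the fixed trace variant; here a constant forgetting factor stands in, and all signals and gain values are synthetic.

        import numpy as np

        def rls_update(theta_hat, P, phi, y, lam=0.98):
            """One RLS step: phi is the regressor, y the measured output."""
            K = P @ phi / (lam + phi @ P @ phi)      # gain vector
            theta_hat = theta_hat + K * (y - phi @ theta_hat)
            P = (P - np.outer(K, phi @ P)) / lam     # covariance update
            return theta_hat, P

        rng = np.random.default_rng(0)
        theta_hat, P = np.zeros(2), 1e3 * np.eye(2)
        KP, KD = 8.0, 2.0                            # "true" gains to recover
        for _ in range(500):
            phi = rng.normal(size=2)                 # [theta, dtheta] sample
            y = np.array([KP, KD]) @ phi + 0.05 * rng.normal()
            theta_hat, P = rls_update(theta_hat, P, phi, y)
        print("estimated [KP, KD]:", np.round(theta_hat, 2))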

  10. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
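
    The contrast is easy to reproduce numerically. The sketch below, under the assumption that the error-variance ratio is known, compares the OLS slope with a structural (Deming) slope when both X and Y carry measurement error; all numbers are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        true_slope, true_icept, n = 2.0, 1.0, 100
        xi = rng.uniform(0, 10, n)                     # error-free X
        x = xi + rng.normal(0, 1.0, n)                 # observed X (with error)
        y = true_icept + true_slope * xi + rng.normal(0, 1.0, n)

        sxx = np.var(x); syy = np.var(y)
        sxy = np.cov(x, y, bias=True)[0, 1]

        b_ols = sxy / sxx                              # attenuated toward zero
        delta = 1.0                                    # var(err_Y) / var(err_X), assumed known
        b_deming = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                    + 4 * delta * sxy ** 2)) / (2 * sxy)
        print(f"OLS slope: {b_ols:.2f}, Deming slope: {b_deming:.2f} (true 2.0)")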

  11. Numerical Solution of Poroelastic Wave Equation Using Nodal Discontinuous Galerkin Finite Element Method

    NASA Astrophysics Data System (ADS)

    Shukla, K.; Wang, Y.; Jaiswal, P.

    2014-12-01

    In a porous medium, seismic energy propagates not only through the matrix but also through the pore fluids. The differential movement between the sediment grains of the matrix and the interstitial fluid generates a diffusive wave which is commonly referred to as the slow P-wave. The combined system of equations, which includes both elastic and diffusive phases, is known as poroelasticity. Analyzing seismic data through poroelastic modeling results in accurate interpretation of amplitude and separation of wave modes, leading to more accurate estimation of the geomechanical properties of rocks. Despite its obvious multi-scale application, from sedimentary reservoir characterization to the deep-earth fractured crust, poroelasticity remains under-developed, primarily due to the complex nature of its constituent equations. We present a detailed formulation of the poroelastic wave equations for isotropic media by combining Biot theory and Newtonian mechanics. The system of poroelastic wave equations comprises eight time-dependent hyperbolic PDEs in 2D, while in 3D the number goes up to thirteen. Eigendecomposition of the Jacobian of these systems confirms the presence of an additional slow P-wave phase with velocity lower than the shear wave, posing stability issues for the numerical scheme. To circumvent the issue, we derived a numerical scheme using the nodal discontinuous Galerkin approach, adopting triangular meshes in 2D and extending to tetrahedral meshes for 3D problems. In our nodal DG approach the basis functions over a triangular element are interpolated using Legendre-Gauss-Lobatto (LGL) points, leading to more accurate local solutions than in the case of simple DG. We have tested the numerical scheme for poroelastic media in the 1D and 2D cases, and the solutions obtained offer high accuracy compared to other methods such as finite difference, finite volume and pseudo-spectral. The nodal nature of our approach makes it easy to convert the application into a multi-threaded algorithm.
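
    For reference, the LGL interpolation points mentioned above are cheap to generate: the interior nodes are the roots of the derivative of the Legendre polynomial P'_N, plus the endpoints. A small sketch (not from the paper) using NumPy's Legendre utilities:

        import numpy as np
        from numpy.polynomial import legendre

        def lgl_nodes(N):
            """Return the N+1 Legendre-Gauss-Lobatto nodes on [-1, 1]."""
            cN = np.zeros(N + 1)
            cN[-1] = 1.0                                       # coefficients of P_N
            interior = legendre.legroots(legendre.legder(cN))  # roots of P'_N
            return np.concatenate(([-1.0], interior, [1.0]))

        print(lgl_nodes(4))   # five nodes for a 4th-order element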

  12. Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1999-01-01

    Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.

  13. A suggested procedure for resolving an anomaly in least-squares data analysis known as "Peelle's Pertinent Puzzle" and the general implications for nuclear data evaluation

    SciTech Connect

    Chiba, Satoshi; Smith, D.L.

    1991-09-01

    Modern nuclear-data evaluation methodology is based largely on statistical inference, with the least-squares technique being chosen most often to generate best estimates for physical quantities and their uncertainties. It has been observed that least-squares evaluations which employ covariance matrices based on absolute errors derived directly from the reported experimental data often tend to produce results which appear to be too low. This anomaly is discussed briefly in this report, and a procedure for resolving it is suggested. The method involves employing data uncertainties which are derived from errors expressed in percent. These percent errors are used, in conjunction with reasonable a priori estimates for the quantities to be evaluated, to derive the covariance matrices which are required for applications of the least-squares procedure. This approach appears to lead to more rational weighting of the experimental data and, thus, to more realistic evaluated results than are obtained when the errors are based on the actual data. The procedure is very straightforward when only one parameter must be estimated. However, for those evaluation exercises involving more than one parameter, this technique demands that a priori estimates be provided at the outset for all of the parameters in question. Then, the least-squares method is applied iteratively to produce a sequence of sets of estimated values which are anticipated to converge toward a particular set of parameters which one then designates as the "best" evaluated results from the exercise. It is found that convergence usually occurs very rapidly when the a priori estimates approximate the final solution reasonably well.
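
    The anomaly and the suggested remedy can both be shown in a few lines. The sketch below uses the classic statement of the puzzle, two measurements (1.5 and 1.0) of the same quantity with 10% independent errors and a fully correlated 20% normalization error; the percent-error covariance built from a common prior restores a sensible average.

        import numpy as np

        y = np.array([1.5, 1.0])
        e = np.ones(2)

        def gls_average(V):
            Vi = np.linalg.inv(V)
            return (e @ Vi @ y) / (e @ Vi @ e)

        # Covariance from absolute errors derived directly from the data:
        V_data = np.diag((0.1 * y) ** 2) + np.outer(0.2 * y, 0.2 * y)
        print(f"data-based weighting:  {gls_average(V_data):.3f}")   # ~0.882, below both points

        # Covariance from percent errors applied to a common a priori estimate:
        x0 = 1.25
        V_prior = np.diag((0.1 * x0 * e) ** 2) + np.outer(0.2 * x0 * e, 0.2 * x0 * e)
        print(f"prior-based weighting: {gls_average(V_prior):.3f}")  # 1.250, between the points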

  14. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    NASA Astrophysics Data System (ADS)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured values of CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, the root mean squared error and the mean absolute relative error have been employed to compare the performances of the models. It is concluded that the errors decrease after dimension reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
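
    A hedged sketch of the hybrid idea follows: PLS compresses the predictors, and an SVM regressor is trained on the resulting scores. The feature set, shapes and hyperparameters are invented; only the scikit-learn calls are real.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 12))     # e.g. meteorology plus lagged CO values
        co = X[:, :3] @ np.array([1.0, 0.5, -0.8]) + 0.2 * rng.normal(size=1000)

        pls = PLSRegression(n_components=3).fit(X, co)
        scores = pls.transform(X)           # reduced, decorrelated inputs for the SVM
        svm = SVR(kernel="rbf", C=10.0).fit(scores, co)
        print("training R^2 of hybrid PLS-SVM:", round(svm.score(scores, co), 3))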

  15. Comparison of Kriging and Moving Least Square Methods to Change the Geometry of Human Body Models.

    PubMed

    Jolivet, Erwan; Lafon, Yoann; Petit, Philippe; Beillas, Philippe

    2015-11-01

    Finite Element Human Body Models (HBM) have become powerful tools to study the response to impact. However, they are typically only developed for a limited number of sizes and ages. Various approaches driven by control points have been reported in the literature for the non-linear scaling of these HBM into models with different geometrical characteristics. The purpose of this study is to compare the performance of commonly used control-point-based interpolation methods in different usage scenarios. Performance metrics include the respect of the targets, the mesh quality and the runability. For this study, the Kriging and Moving Least Squares interpolation approaches were compared in three test cases. The first two cases correspond to changes of anthropometric dimensions of (1) a child model (from 6 to 1.5 years old) and (2) the GHBMC M50 model (Global Human Body Models Consortium, from 50th percentile male to 5th percentile female). For the third case, the GHBMC M50 ribcage was scaled to match a rib cage geometry derived from a CT-scan. In the first two test cases, all tested methods provided similar shapes with acceptable results in terms of time needed for the deformation (a few minutes at most), overall respect of the targets, element quality distribution and time step for explicit simulation. The personalization of the rib cage proved to be much more challenging. None of the methods tested provided fully satisfactory results at the level of the rib trajectory and section. There were corrugated local deformations unless a smoothed regression through relaxation was used. Overall, the results highlight the importance of the target definition over the interpolation method. PMID:26660750
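
    As a rough illustration of control-point-driven morphing, the sketch below interpolates prescribed control-point displacements to all mesh nodes with a radial basis function, a kriging-like scheme. The mesh, control points and targets are random stand-ins for an HBM surface.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(4)
        nodes = rng.uniform(0, 1, size=(500, 3))        # mesh node coordinates
        ctrl = rng.uniform(0, 1, size=(10, 3))          # control points
        ctrl_disp = 0.05 * rng.normal(size=(10, 3))     # their target displacements

        interp = RBFInterpolator(ctrl, ctrl_disp, kernel="thin_plate_spline")
        morphed = nodes + interp(nodes)                 # deform the whole mesh
        print("max node displacement:", np.abs(morphed - nodes).max())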

  16. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    SciTech Connect

    Tang Shaojie; Tang Xiangyang

    2012-09-15

    Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of the inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of a Markov random field or soft-thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while avoiding the occurrence of 'salt-and-pepper' noise and mosaic artifacts. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain
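
    The single-scale core of a PWLS smoother is compact enough to sketch: noisier samples receive lower statistical weight, and a quadratic roughness penalty acts as the regularizer. This toy 1-D version (synthetic data, ad hoc penalty strength) omits the multiscale decomposition and edge enhancement entirely.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        truth = np.sin(np.linspace(0, 3 * np.pi, n))
        sigma = 0.05 + 0.2 * rng.uniform(size=n)          # heteroscedastic noise levels
        y = truth + sigma * rng.normal(size=n)

        W = np.diag(1.0 / sigma ** 2)                     # statistical weights
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # first-difference operator
        beta = 50.0                                       # penalty strength (ad hoc)
        x = np.linalg.solve(W + beta * D.T @ D, W @ y)    # closed-form PWLS solution
        print("RMSE raw vs denoised:",
              np.sqrt(np.mean((y - truth) ** 2)).round(3),
              np.sqrt(np.mean((x - truth) ** 2)).round(3))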

  17. Multi-Processing Least Squares Collocation Applications to Gravity Field Analysis.

    NASA Astrophysics Data System (ADS)

    Kaas, Eigil; Sørensen, Brian; Tscherning, Carl Christian; Veicherts, Martin

    2013-04-01

    Least Squares Collocation (LSC) is used for the modeling of the gravity field, including predictions and error estimates of various quantities. The method requires solving for as many unknowns as there are data and parameters. Cholesky reduction must be used in a non-standard form due to the missing positive-definiteness of the equation system. Furthermore, the error estimation produces a rectangular or triangular matrix which must be Cholesky reduced in the non-standard manner. LSC offers the possibility of adding new sets of data without reprocessing earlier reduced parts of the equation system. Due to these factors, standard Cholesky reduction programs using multi-processing cannot easily be applied. We have therefore implemented the use of Fortran Open Multi-Processing (OpenMP) and the Message Passing Interface (MPI) in the non-standard Cholesky reduction. In the computation of matrix elements (covariances), as well as in the evaluation of the spherical harmonic series used in the remove/restore setting, we also take advantage of multi-processing. We describe the implementation using quadratic blocks, which aids in reducing the data transport overhead. Timing results for different block sizes and numbers of equations are presented. Both OpenMP and MPI scale favorably, so that e.g. the prediction and error estimation of grids from GOCE TRF-data and ground gravity data can be done in less than two hours for a 25° by 25° area with data selected close to 0.125° nodes. The results were obtained using a dual-processor Intel(R) Xeon(R) CPU at 2.40 GHz with a total of 24 threads.

  18. Semiclassical calculations of tunneling using interpolating moving least-squares potentials

    NASA Astrophysics Data System (ADS)

    Pham, Phong

    The interpolating moving least-squares (IMLS) and local IMLS (L-IMLS) methods are incorporated into semiclassical trajectory simulations. Issues related to the implementation are investigated. Potential energy surfaces (PES) constructed by the IMLS/L-IMLS methods are used to study tunneling in the polyatomic systems HONO and malonaldehyde, where direct dynamics becomes prohibitively expensive at high ab initio levels. To study cis-trans isomerization in HONO, the PES is constructed by L-IMLS fitting at the MP4(SDQ)/6-31++G(d,p) level with the HDMR(5,3,3) basis set. The results obtained are comparable with others in the literature. The semiclassical rates are close to the reference quantum mechanical ones. The isomerization is governed by energy transfer into the reaction coordinate, the torsional mode; the rate is strongly mode-selective, and much faster in the cis-trans direction than in the opposite one. To study the ground-state splitting of malonaldehyde, the PES is first constructed by single-level L-IMLS fitting at the MP2/6-31G(d,p) level with the HDMR(3,2) basis set. The dual-level method is then employed to increase the accuracy of the PES and reduce the computational cost, using MP4/6-31G(d,p) as the high-level method. The results obtained are comparable with others in the literature. For a 0.5 kcal/mol fitting tolerance, the splitting is 38.7 and 8.8 cm-1 at the MP2 single level, and 29.6 and 5.5 cm-1 at the MP4 dual level for the H9 and D5D9 isotopomers respectively, compared to the experimental values of 21.6 and 2.884 cm-1. The splittings are within a factor of two of experiment and agree with other quantum mechanical and semiclassical studies.
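
    The core of a moving least-squares fit is easy to sketch: at each evaluation point a low-order polynomial is fit by weighted least squares, with weights that decay with distance from the data. The 1-D toy below is plain MLS (the interpolating variant uses weights that become singular at the data points), with an invented basis degree, weight width and test function.

        import numpy as np

        def mls(x_eval, x_data, y_data, degree=2, h=0.5):
            out = np.empty_like(x_eval)
            for k, x0 in enumerate(x_eval):
                w = np.exp(-((x_data - x0) / h) ** 2)       # distance-decaying weights
                A = np.vander(x_data - x0, degree + 1)      # local polynomial basis
                coef, *_ = np.linalg.lstsq(A * w[:, None], y_data * w, rcond=None)
                out[k] = coef[-1]                           # polynomial value at x0
            return out

        x_data = np.linspace(0, 2 * np.pi, 15)
        y_data = np.sin(x_data)                             # stand-in "PES" samples
        x_eval = np.linspace(0, 2 * np.pi, 100)
        print("max |error|:", np.abs(mls(x_eval, x_data, y_data) - np.sin(x_eval)).max())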

  19. Identifying grey matter changes in schizotypy using partial least squares correlation.

    PubMed

    Wiebels, Kristina; Waldie, Karen E; Roberts, Reece P; Park, Haeme R P

    2016-08-01

    Neuroimaging research into the brain structure of schizophrenia patients has shown consistent reductions in grey matter volume relative to healthy controls. Examining structural differences in individuals with high levels of schizotypy may help elucidate the course of disorder progression, and provide further support for the schizotypy-schizophrenia continuum. Thus far, the few studies investigating grey matter differences in schizotypy have produced inconsistent results. In the current study, we used a multivariate partial least squares (PLS) approach to clarify the relationship between psychometric schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experiences) and grey matter volume in 49 healthy adults. We found a negative association between all schizotypy dimensions and grey matter volume in the frontal and temporal lobes, as well as the insula. We also found a positive association between all schizotypy dimensions and grey matter volume in the parietal and temporal lobes, and in subcortical regions. Further correlational analyses revealed that positive and disorganised schizotypy were strongly associated with key regions (left superior temporal gyrus and insula) most consistently reported to be affected in schizophrenia and schizotypy. We also compared PLS with the typically used General Linear Model (GLM) and demonstrate that PLS can be reliably used as an extension to voxel-based morphometry (VBM) analyses. This may be particularly valuable for schizotypy research due to the ability of PLS to detect small but reliable effects. Together, the findings indicate that healthy schizotypal individuals exhibit structural changes in regions associated with schizophrenia. This adds to the evidence of an overlap of phenotypic expression between schizotypy and schizophrenia, and may help establish biological endophenotypes for the disorder. PMID:27208815
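
    For readers unfamiliar with the PLS correlation variant used here, its core is an SVD of the brain-by-behaviour cross-correlation matrix, yielding paired saliences. The bare-bones sketch below uses random data with arbitrary shapes and omits the permutation and bootstrap testing such studies rely on.

        import numpy as np

        rng = np.random.default_rng(6)
        n, p, q = 49, 200, 4                 # subjects, grey matter voxels, schizotypy scores
        X = rng.standard_normal((n, p))      # voxel volumes (assume z-scored columns)
        Y = rng.standard_normal((n, q))      # schizotypy dimensions (assume z-scored)

        R = Y.T @ X / (n - 1)                # q x p cross-correlation matrix
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        brain_scores = X @ Vt[0]             # subject scores on the first brain salience
        behav_scores = Y @ U[:, 0]           # ... and on the first behavioural salience
        print("covariance explained by LV1:", round(float(s[0] ** 2 / (s ** 2).sum()), 3))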

  20. Multimodal Classification of Mild Cognitive Impairment Based on Partial Least Squares.

    PubMed

    Wang, Pingyue; Chen, Kewei; Yao, Li; Hu, Bin; Wu, Xia; Zhang, Jiacai; Ye, Qing; Guo, Xiaojuan

    2016-08-10

    In recent years, increasing attention has been given to the identification of the conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD). Brain neuroimaging techniques have been widely used to support the classification or prediction of MCI. The present study combined magnetic resonance imaging (MRI), 18F-fluorodeoxyglucose PET (FDG-PET), and 18F-florbetapir PET (florbetapir-PET) to discriminate MCI converters (MCI-c, individuals with MCI who convert to AD) from MCI non-converters (MCI-nc, individuals with MCI who have not converted to AD in the follow-up period) based on the partial least squares (PLS) method. Two types of PLS models (informed PLS and agnostic PLS) were built based on 64 MCI-c and 65 MCI-nc from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The results showed that the three-modality informed PLS model achieved better classification accuracy of 81.40%, sensitivity of 79.69%, and specificity of 83.08% compared with the single-modality model, and the three-modality agnostic PLS model also achieved better classification compared with the two-modality model. Moreover, combining the three modalities with clinical test score (ADAS-cog), the agnostic PLS model (independent data: florbetapir-PET; dependent data: FDG-PET and MRI) achieved optimal accuracy of 86.05%, sensitivity of 81.25%, and specificity of 90.77%. In addition, the comparison of PLS, support vector machine (SVM), and random forest (RF) showed greater diagnostic power of PLS. These results suggested that our multimodal PLS model has the potential to discriminate MCI-c from the MCI-nc and may therefore be helpful in the early diagnosis of AD. PMID:27567818

  1. Accuracy Improvement by the Least Squares Image Matching Evaluated on the CARTOSAT-1

    NASA Astrophysics Data System (ADS)

    Afsharnia, H.; Azizi, A.; Arefi, H.

    2015-12-01

    Generating accurate elevation data from satellite images is a prerequisite step for applications that involve disaster forecasting and management using GIS platforms. In this respect, high-resolution satellite optical sensors may be regarded as one of the prime and most valuable sources for generating accurate and updated elevation information. However, one of the main drawbacks of conventional approaches for automatic elevation generation from these satellite optical data using image matching techniques is the lack of flexibility of the image matching functional models in dynamically taking into account the geometric and radiometric dissimilarities between homologous stereo image points. The classical least squares image matching (LSM) method, on the other hand, is quite flexible in incorporating the geometric and radiometric variations of image pairs into its functional model. The main objective of this paper is to evaluate and compare the potential of the LSM technique for generating disparity maps from high-resolution satellite images to achieve sub-pixel precision. To evaluate the rate of success of the LSM, the size of the y-disparities between the homologous points is taken as the precision criterion. The evaluation is performed on Cartosat-1 along-track stereo images over highly mountainous terrain. The precision improvement is judged based on the standard deviation and the scatter pattern of the y-disparity data. The analysis of the results indicates that LSM achieved a matching precision of about 0.18 pixels, which is clearly superior to manual pointing, which yielded a precision of 0.37 pixels.
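
    The flexibility of LSM comes from iteratively linearizing the grey-value differences with respect to the matching parameters. The 1-D toy below estimates only a sub-pixel shift by Gauss-Newton; real LSM adds affine geometric and radiometric terms. Signals and the true shift are synthetic.

        import numpy as np

        x = np.linspace(0, 10, 500)
        f = np.exp(-((x - 5.0) ** 2))                 # template signal
        true_shift = 0.37
        g = np.exp(-((x - 5.0 - true_shift) ** 2))    # search signal, shifted template

        s = 0.0
        for _ in range(8):
            g_s = np.interp(x + s, x, g)              # search signal resampled by s
            grad = np.gradient(g_s, x)                # sensitivity d(g_s)/ds
            r = f - g_s                               # grey-value residuals
            s += (grad @ r) / (grad @ grad)           # Gauss-Newton update
        print(f"estimated shift: {s:.3f} (true {true_shift})")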

  2. Blending moving least squares techniques with NURBS basis functions for nonlinear isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Cardoso, Rui P. R.; Cesar de Sa, J. M. A.

    2014-06-01

    IsoGeometric Analysis (IGA) is increasing in popularity as a new numerical tool for the analysis of structures. IGA provides: (i) the possibility of using higher-order polynomials for the basis functions; (ii) smoothness for contact analysis; (iii) the possibility of operating directly on CAD geometry. The major drawback of IGA is the non-interpolatory character of the basis functions, which adds difficulty in handling essential boundary conditions. Moreover, IGA suffers from the same problems exhibited by other methods when it comes to reproducing isochoric and transverse shear strain deformations, especially for low-order basis functions. In this work, projection techniques based on moving least squares (MLS) approximations are used to alleviate both volumetric and transverse shear locking in IGA. The main objective is to project the isochoric and transverse shear deformations from lower-order subspaces by using MLS, in this way alleviating volumetric and transverse shear locking on the fully integrated space. Because different degrees of approximation functions can be used in IGA, different Gauss integration rules can also be employed, making the procedures for locking treatment in IGA very dependent on the degree of the approximation functions used. The blending of MLS with Non-Uniform Rational B-Splines (NURBS) basis functions is a methodology to overcome different locking pathologies in IGA which can also be used for enrichment procedures. Numerical examples for three-dimensional NURBS with only translational degrees of freedom are presented for both shell-type and plane strain structures.

  3. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    PubMed

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by the detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary, since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research on the detection of seizures has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation into the consistency of performance. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed: the original EEG signal is approximated by a sinusoidal curve whose amplitude is formed by a polynomial function, and compared with the previously developed spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates, and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1. Numerical results suggest that these
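
    The first model's idea, a sinusoid whose amplitude is a polynomial in time, reduces to ordinary linear least squares once the frequency is fixed, since the unknowns multiply known basis functions t^k sin(wt) and t^k cos(wt). The sketch below uses an invented frequency, degree and signal:

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 400)
        sig = (1 + 2 * t - 1.5 * t ** 2) * np.sin(40 * t) + 0.1 * rng.standard_normal(400)

        w, deg = 40.0, 2
        # Design matrix with columns t^k*sin(wt) and t^k*cos(wt), k = 0..deg
        A = np.column_stack([t ** k * f(w * t) for f in (np.sin, np.cos)
                             for k in range(deg + 1)])
        coef, *_ = np.linalg.lstsq(A, sig, rcond=None)
        print("fitted amplitude coefficients:", np.round(coef[:3], 2))  # ~[1, 2, -1.5]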

  4. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  5. Using a partial least squares (PLS) method for estimating cyanobacterial pigments in eutrophic inland waters

    NASA Astrophysics Data System (ADS)

    Robertson, A. L.; Li, L.; Tedesco, L.; Wilson, J.; Soyeux, E.

    2009-08-01

    Midwestern lakes and reservoirs are commonly exposed to anthropogenic eutrophication. Cyanobacteria thrive in these nutrient-rich waters, and some species pose three threats: 1) taste & odor (drinking), 2) toxins (drinking + recreational) and 3) water treatment process disturbance. Managers for drinking water production are interested in the rapid identification of cyanobacterial blooms to minimize effects caused by harmful cyanobacteria. There is potential to monitor cyanobacteria through the remote sensing of two algal pigments: chlorophyll a (CHL) and phycocyanin (PC). Several empirical methods that develop spectral parameters (e.g., simple band ratios) sensitive to these two pigments and map reflectance to pigment concentration have been used in a number of investigations using field-based spectroradiometers. This study tests a multivariate analysis approach, partial least squares (PLS) regression, for the estimation of CHL and PC. PLS models were trained with 35 spectra collected from three central Indiana reservoirs during a 2007 field campaign with dual-headed Ocean Optics USB4000 field spectroradiometers (355-802 nm, nominal 1.0 nm intervals), with the CHL and PC concentrations of the corresponding water samples analyzed at Indiana University-Purdue University at Indianapolis. Validation of these models with the 19 remaining spectra shows that PLS (CHL: R2=0.90, slope=0.91, RMSE=20.61 μg/L; PC: R2=0.65, slope=1.15, RMSE=23.04 μg/L) performed equally well to the band-tuning model based on Gitelson et al. 2005 (CHL: R2=0.75, slope=0.84, RMSE=40.16 μg/L; PC: R2=0.59, slope=1.14, RMSE=20.24 μg/L).

  6. [Compensation-fitting extraction of dynamic spectrum based on least squares method].

    PubMed

    Lin, Ling; Wu, Ruo-Nan; Li, Yong-Cheng; Zhou, Mei; Li, Gang

    2014-07-01

    An extraction method for the dynamic spectrum (DS) with a high signal-to-noise ratio is key to achieving high-precision noninvasive detection of blood components. In order to further improve the accuracy and speed of DS extraction, the linear similarity between photoelectric plethysmography (PPG) signals at any two different wavelengths was analyzed in principle, and an experimental verification was conducted. Based on this property, a compensation-fitting extraction method is proposed. Firstly, the baseline of the PPG at each wavelength is estimated and compensated using a single-period sampling average, which removes the effect of baseline drift caused by motion artifacts. Secondly, the slope of a least squares fit between each single-wavelength PPG and the full-wavelength averaged PPG is acquired to construct the DS, which significantly suppresses random noise. Comparative experiments were conducted on 25 samples in the NIR and Vis wavebands. The flatness and processing time of the DS using compensation-fitting extraction were compared with those using single-trial estimation. In the NIR band, the average variance using compensation-fitting estimation was 69.0% of that using single-trial estimation, and in the Vis band it was 57.4%, which shows that the flatness of the DS is steadily improved. In the NIR band, the data processing time using compensation-fitting extraction could be reduced to 10% of that using single-trial estimation, and in the Vis band to 20%, which shows that the time for data processing is significantly reduced. The experimental results show that, compared with the single-trial estimation method, dynamic spectrum compensation-fitting extraction steadily improves the signal-to-noise ratio of the DS, significantly improves estimation quality, reduces data processing time, and simplifies the procedure. Therefore, this new method is expected to promote the development of noninvasive blood component measurement. PMID:25269319

  7. A variant of sparse partial least squares for variable selection and data exploration

    PubMed Central

    Olson Hunt, Megan J.; Weissfeld, Lisa; Boudreau, Robert M.; Aizenstein, Howard; Newman, Anne B.; Simonsick, Eleanor M.; Van Domelen, Dane R.; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina

    2014-01-01

    When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed “all-possible” SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a “large” number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors. PMID:24624079

  8. Spatial-temporal surface deformation of Los Angeles over 2003-2007 from weighted least squares DInSAR

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Li, Zhiwei; Ding, Xiaoli; Zhu, Jianjun; Sun, Qian

    2013-04-01

    The spatial-temporal evolution of surface displacement in the Los Angeles area over 2003-2007 is measured with a weighted least squares (WLS) small baseline (SB) DInSAR technique. 32 small baseline interferograms are generated from 18 SAR images acquired by the ENVISAT satellite and separated into two independent subsets. An additional interferogram with a longer baseline but good interferometric quality is used to link the two subsets. A time series of displacements with their corresponding standard deviations (STD) is derived from the WLS DInSAR solution by considering the interferometric displacement variances when determining the weighting scheme. Both the long-term trends and the seasonal variations of the displacements in the area are determined in the study and validated with GPS measurements from a number of stations of the Southern California Integrated GPS Network (SCIGN). The mean line-of-sight (LOS) displacement velocity map shows up to 3 cm/year of ground motion, with up to 10 cm of accumulated displacement in Santa Fe Springs and -8 cm in Pomona over 2003-2007. Seasonal variations are identified in the Santa Ana basin, the San Gabriel valley and the Lytle Creek basin.

  9. WLS-ENO: Weighted-least-squares based essentially non-oscillatory schemes for finite volume methods on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Liu, Hongxu; Jiao, Xiangmin

    2016-06-01

    ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes are widely used high-order schemes for solving partial differential equations (PDEs), especially hyperbolic conservation laws with piecewise smooth solutions. For structured meshes, these techniques can achieve high order accuracy for smooth functions while being non-oscillatory near discontinuities. For unstructured meshes, which are needed for complex geometries, similar schemes are required but they are much more challenging. We propose a new family of non-oscillatory schemes, called WLS-ENO, in the context of solving hyperbolic conservation laws using finite-volume methods over unstructured meshes. WLS-ENO is derived based on Taylor series expansion and solved using a weighted least squares formulation. Unlike other non-oscillatory schemes, the WLS-ENO does not require constructing sub-stencils, and hence it provides a more flexible framework and is less sensitive to mesh quality. We present rigorous analysis of the accuracy and stability of WLS-ENO, and present numerical results in 1-D, 2-D, and 3-D for a number of benchmark problems, and also report some comparisons against WENO.
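
    A heavily simplified 1-D illustration of the weighted least squares idea is sketched below: a quadratic is fit to a five-cell stencil, with weights collapsed for cells whose values differ sharply from the central cell, instead of selecting among sub-stencils as classic ENO does. The weight recipe is ad hoc and not the paper's.

        import numpy as np

        centers = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # stencil cell centers
        u = np.where(centers < 0.5, 1.0, 0.0)              # step function: jump at x = 0.5
        w = 1.0 / (1e-6 + (u - u[2]) ** 2) ** 2            # tiny weight across the jump
        sw = np.sqrt(w)

        V = np.vander(centers, 3)                          # quadratic polynomial basis
        coef, *_ = np.linalg.lstsq(V * sw[:, None], u * sw, rcond=None)
        print(f"face value at x=0.5: {np.polyval(coef, 0.5):.3f}")  # ~1.0, no overshoot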

  10. Kinetic spectrophotometric determination of hydrocortisone acetate in a pharmaceutical preparation by use of partial least-squares regression.

    PubMed

    Blanco, M; Coello, J; Iturriaga, H; Maspoch, S; Villegas, N

    1999-06-01

    A kinetic spectrophotometric method for the determination of hydrocortisone acetate, based on its condensation with isonicotinic acid hydrazide, is proposed. The method is applied to the determination of hydrocortisone acetate in a commercially available pharmaceutical preparation, presented as a pomade, that also contains another corticosteroid and additional active compounds. The operating procedure involves dissolving the pomade in chloroform and adding the reagent solution directly to the cuvette, thereby avoiding the prior extraction of analytes from the insoluble pomade matrix required by the alternative HPLC procedure. Calibration is performed by partial least-squares regression, using absorbance or first-derivative spectral values recorded each minute during the first 30 min of reaction. Use of first-derivative spectra overcomes possible scattered-light problems produced by precipitating excipients, and gave slightly better results than absorbance data. The relative standard deviation obtained for 11 replicates analysed on different days was approx. 1.5%. The proposed method improves both the accuracy and precision of the classical initial-rate method and the precision of the HPLC procedure. PMID:10736875

  11. Solution of Reynolds-averaged Navier-Stokes equations by discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Kang, Sungwoo; Yoo, Jung Yul

    2007-11-01

    The discontinuous Galerkin method is a finite element method that allows discontinuities at inter-element boundaries. The discontinuities in the method are treated by approximate Riemann solvers. One important feature of the method is that it attains high-order accuracy on unstructured meshes with no difficulty. Due to this feature, it can be useful for various practical applications in turbulence and aeroacoustics, but a few problems remain to be solved before the method is applicable to practical flow problems. Due to the discontinuous approximations in the discontinuous Galerkin method, the treatment of viscous terms is complicated and expensive. Moreover, careful treatment of the source terms in turbulence model equations is necessary for the Reynolds-averaged Navier-Stokes equations to prevent blow-up of high-order-accurate simulations. In this study, we compare high-order accurate discontinuous Galerkin methods with different viscous treatments and stabilizations of source terms for the compressible Reynolds-averaged Navier-Stokes equations. The Spalart-Allmaras or k-φ model is used as the turbulence model. To compare the implemented formulations, steady turbulent flow over a flat plate and unsteady turbulent flow over a cavity are solved.

  12. Element free Galerkin formulation of composite beam with longitudinal slip

    SciTech Connect

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal; Yassin, Airil Y. Mohd

    2015-05-15

    The behaviour between the two materials in a composite beam is assumed to be one of partial interaction when longitudinal slip at the interfacial surfaces is considered. While commonly analysed by mesh-based formulations, in this study the beam partial interaction analysis is carried out numerically with a meshless formulation known as the Element Free Galerkin (EFG) method. As a meshless formulation implies that the problem domain is discretised only by nodes, the EFG method is based on the Moving Least Squares (MLS) approach for the formulation of the shape functions, with its weak form developed using a variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, as verified against the analytical solution, thus signifying its applicability to partial interaction problems. Based on the numerical test results, the Cubic Spline and Quartic Spline weight functions yield better accuracy for the EFG formulation compared to the other proposed weight functions.

  13. A hybridizable discontinuous Galerkin method combined to a Schwarz algorithm for the solution of 3d time-harmonic Maxwell's equation

    NASA Astrophysics Data System (ADS)

    Li, Liang; Lanteri, Stéphane; Perrussel, Ronan

    2014-01-01

    A Schwarz-type domain decomposition method is presented for the solution of the system of 3d time-harmonic Maxwell's equations. We introduce a hybridizable discontinuous Galerkin (HDG) scheme for the discretization of the problem based on a tetrahedrization of the computational domain. The discrete system of the HDG method on each subdomain is solved by an optimized sparse direct (LU factorization) solver. The solution of the interface system in the domain decomposition framework is accelerated by a Krylov subspace method. The formulation and the implementation of the resulting DD-HDG (Domain Decomposed-Hybridizable Discontinuous Galerkin) method are detailed. Numerical results show that the resulting DD-HDG solution strategy has an optimal convergence rate and can save both CPU time and memory cost compared to a classical upwind flux-based DD-DG (Domain Decomposed-Discontinuous Galerkin) approach.

  14. Neutron spectrum unfolding using artificial neural network and modified least square method

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Abolfazl

    2016-09-01

    In the present paper, the neutron spectrum is reconstructed using the Artificial Neural Network (ANN) and Modified Least Squares (MLSQR) methods. The detector's response (pulse height distribution), the required data for unfolding the energy spectrum, is calculated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Unlike the usual methods that apply inversion procedures to unfold the energy spectrum from the Fredholm integral equation, the MLSQR method uses a direct procedure. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry of neutron sources, the neutron pulse height distribution is simulated/measured in the NE-213 detector. The response matrix is calculated using the MCNPX-ESUT computational code through simulation of the NE-213 detector's response to monoenergetic neutron sources. For a known neutron pulse height distribution, the energy spectrum of the neutron source is unfolded using the MLSQR method. In the developed multilayer perceptron neural network for reconstruction of the energy spectrum of the neutron source, there is no need for formation of the response matrix. The multilayer perceptron neural network is developed based on logsig, tansig and purelin transfer functions. The developed artificial neural network consists of two hidden layers with hyperbolic tangent sigmoid transfer functions and a linear transfer function in the output layer. The motivation for applying the ANN method may be explained by the fact that no matrix inversion is needed for energy spectrum unfolding. The simulated neutron pulse height distributions in each light bin due to randomly generated neutron spectra are considered as the input data of the ANN, and the randomly generated energy spectra are considered as the output data of the ANN. The energy spectrum of the neutron source is identified with high accuracy using both the MLSQR and ANN methods. The results obtained from

  15. Kernelized partial least squares for feature reduction and classification of gene microarray data

    PubMed Central

    2011-01-01

    Background The primary objectives of this paper are: 1.) to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2.) to quantitatively measure and verify (by an independent means) the efficacy of this PLS process. A secondary objective is to integrate these significant improvements in diagnostic and prognostic biomedical applications into the clinical research arena. That is, to devise a framework for converting SLT results into direct, useful clinical information for patient care or pharmaceutical research. We therefore propose, and preliminarily evaluate, a process whereby PLS, K-PLS, and Support Vector Machines (SVM) may be integrated with the accepted and well-understood traditional biostatistical "gold standard", the Cox Proportional Hazard model and Kaplan-Meier survival analysis methods. Specifically, this new combination will be illustrated with both PLS and Kaplan-Meier followed by PLS and Cox Hazard Ratios (CHR), and can be easily extended to both the K-PLS and SVM paradigms. Finally, these previously described processes are contained in the Fine Feature Selection (FFS) component of our overall feature reduction/evaluation process, which consists of the following components: 1.) coarse feature reduction, 2.) fine feature selection and 3.) classification (as described in this paper) and prediction. Results Our results for PLS and K-PLS showed that these techniques, as part of our overall feature reduction process, performed well on noisy microarray data. The best performance was a good 0.794 Area Under a Receiver Operating Characteristic (ROC) Curve (AUC) for classification of recurrence prior to or after 36 months and a strong 0.869 AUC for

  16. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(n̄_i) is used to approximate the uncertainties sigma_i in the data, where n̄_i = E(n_i) is the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 ke
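
    The bias described above is easy to verify numerically. In the sketch below, a constant rate is fit to synthetic Poisson counts: weighting by the observed counts (sigma_i = sqrt(n_i)) pulls the estimate low, while the maximum-likelihood answer, equivalent to weighting by the expected counts, is the plain mean. Rates and sample sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(8)
        lam_true = 8.0
        n = rng.poisson(lam_true, size=100_000)
        n = np.maximum(n, 1)            # data-based weights break down entirely at n = 0

        w = 1.0 / n                     # weights from sigma_i = sqrt(n_i)
        lam_datawt = np.sum(w * n) / np.sum(w)   # harmonic-style mean, biased low
        lam_ml = n.mean()               # Poisson maximum-likelihood estimate
        print(f"data-weighted: {lam_datawt:.2f}, ML: {lam_ml:.2f}, true: {lam_true}")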

  17. Structure borne noise analysis using Helmholtz equation least squares based forced vibro acoustic components

    NASA Astrophysics Data System (ADS)

    Natarajan, Logesh Kumar

    This dissertation presents a structure-borne noise analysis technology that is focused on providing a cost-effective noise reduction strategy. Structure-borne sound is generated or transmitted through structural vibration; however, only a small portion of the vibration can effectively produce sound and radiate it to the far field. Therefore, cost-effective noise reduction relies on identifying and suppressing the critical vibration components that are directly responsible for an undesired sound. However, current technologies cannot successfully identify these critical vibration components from the point of view of direct contribution to sound radiation, and hence cannot guarantee the best cost-effective noise reduction. The technology developed here provides a strategy for identifying the critical vibration components and methodically suppressing them to achieve a cost-effective noise reduction. The core of this technology is the Helmholtz equation least squares (HELS) based nearfield acoustic holography method. In this study, the HELS formulations, derived in spherical coordinates using spherical wave expansion functions, utilize input data of acoustic pressures measured in the nearfield of a vibrating object to reconstruct the vibro-acoustic responses on the source surface and the acoustic quantities in the far field. Using these formulations, three steps were taken to achieve the goal. First, hybrid regularization techniques were developed to improve the reconstruction accuracy of the normal surface velocity of the original HELS method. Second, correlations between the surface vibro-acoustic responses and acoustic radiation were factorized using singular value decomposition to obtain an orthogonal basis known here as the forced vibro-acoustic components (F-VACs). The F-VACs enable one to identify the critical vibration components for sound radiation in a similar manner to how modal decomposition identifies the critical natural modes in a structural vibration. Finally

  18. Seasonal prediction of the East Asian summer monsoon with a partial-least square model

    NASA Astrophysics Data System (ADS)

    Wu, Zhiwei; Yu, Lulu

    2016-05-01

    Seasonal prediction of the East Asian summer monsoon (EASM) strength is probably one of the most challenging and crucial issues for climate prediction over East Asia. In this paper, a statistical method called partial least squares (PLS) regression is utilized to uncover the principal sea surface temperature (SST) modes in the winter preceding the EASM. Results show that the SST pattern of the first PLS mode is associated with the turnabout of the warming (or cooling) phase of a mega-El Niño/Southern Oscillation (mega-ENSO) (a leading mode of interannual-to-interdecadal variations of global SST), whereas that of the second PLS mode leads the warming/cooling of mega-ENSO by about 1 year, signaling precursory conditions for mega-ENSO. These results indicate that mega-ENSO may provide a critical predictability source for the EASM strength. Based on a 40-year training period (1958-1997), a PLS prediction model is constructed using the two leading PLS modes, and 3-month-lead hindcasts are performed for the validation period of 1998-2013. A promising skill is obtained, comparable to the ensemble mean of versions 3 and 4 of the Canadian Community Atmosphere Model (CanCM3/4) hindcasts from the newly developed North American Multi-model Ensemble Prediction System regarding the interannual variations of the EASM strength. How to improve dynamical model simulation of the EASM is also examined by comparing the CanCM3/4 hindcast (1982-2010) with the 106-year historical run (1900-2005) of the Second Generation Canadian Earth System Model (CanESM2). CanCM3/4 exhibits high skill in the EASM hindcast period 1982-2010, during which it also performs better in capturing the relationship between the EASM and mega-ENSO. By contrast, the simulation skill of CanESM2 is quite low, and it is unable to reproduce the linkage between the EASM and mega-ENSO. All these results emphasize the importance of mega-ENSO in seasonal prediction and dynamical model simulation of the EASM.

  19. First-order system least-squares for second-order elliptic problems with discontinuous coefficients: Further results

    SciTech Connect

    Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.

    1996-12-31

    Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.

  20. Monte Carlo solution for uncertainty propagation in particle transport with a stochastic Galerkin method

    SciTech Connect

    Franke, B. C.; Prinja, A. K.

    2013-07-01

    The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
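
    The sketch below contrasts brute-force Monte Carlo with non-intrusive stochastic collocation (Gauss-Hermite quadrature) on a toy attenuation model with one Gaussian-uncertain parameter; the intrusive SGM moment system itself is not reproduced here, and the model and parameter values are assumptions for illustration.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    # Toy model: transmission exp(-sigma * t) with an uncertain cross
    # section sigma = mu + delta * xi, xi ~ N(0, 1). (Illustrative
    # stand-in for the transport problem; not the authors' code.)
    mu, delta, t = 1.0, 0.2, 2.0
    f = lambda xi: np.exp(-(mu + delta * xi) * t)

    # Brute-force Monte Carlo reference.
    rng = np.random.default_rng(2)
    samples = f(rng.standard_normal(200_000))
    mc_mean, mc_var = samples.mean(), samples.var()

    # Non-intrusive stochastic collocation: evaluate the deterministic
    # model at Gauss-Hermite nodes and combine with quadrature weights.
    x, w = hermegauss(8)                 # 8 collocation points
    w = w / np.sqrt(2.0 * np.pi)         # normalize to the N(0,1) density
    sc_mean = np.sum(w * f(x))
    sc_var = np.sum(w * f(x) ** 2) - sc_mean ** 2

    print(f"MC  mean={mc_mean:.5f} var={mc_var:.5e}")
    print(f"SCM mean={sc_mean:.5f} var={sc_var:.5e}")
    ```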

  1. Petrov-Galerkin's method hybrid with finite element into the Helmholtz equation solution. Part II

    NASA Astrophysics Data System (ADS)

    Rabadan Malda, Itzala; Salazar Cordero, Emigdio; Ortega Herrera, Jose Angel

    2002-11-01

    This work proposes a hybrid of the Petrov-Galerkin numerical method and the finite element method (FEM) to solve the Helmholtz equation when the domain is an open or semi-open tube-shaped configuration with a given number of holes over its cylindrical surface. The aim is to solve these kinds of cavities, which allows very important design parameters to be obtained, such as cavity length; the number, size, and spacing of toneholes; and the form and size of the mouthpiece or outlet. These parameters are the design basis for acoustic and musical instrumentation: baffles, outlet pipes, diffusers, silencers, flutes, oboes, saxophones, trumpets, quenas, and many more. In this way, the advantages of this numerical method over those currently in use are expected to be determined.

  2. A priori noise and regularization in least squares collocation of gravity anomalies

    NASA Astrophysics Data System (ADS)

    Jarmołowski, Wojciech

    2013-12-01

    The paper describes the estimation of covariance parameters in least squares collocation (LSC) by the cross-validation (CV) technique called leave-one-out (LOO). Two parameters of the Gauss-Markov third-order model (GM3) are estimated together with the a priori noise standard deviation, which contributes significantly to the covariance matrix composed of the signal and noise. Numerical tests are performed using a large set of Bouguer gravity anomalies located in the central part of the U.S. Around 103 000 gravity stations are available in the selected area. This dataset, together with regular grids generated from the EGM2008 geopotential model, gives an opportunity to work with various spatial resolutions of the data and heterogeneous variances of the signal and noise. This plays a crucial role in the numerical investigations, because the spatial resolution of the gravity data determines the number of gravity details that we may observe and model. This establishes a relation between the spatial resolution of the data and the resolution of the gravity field model. This relation is inspected in the article and compared to the regularization problem that occurs frequently in data modeling.
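
    A rough sketch of LOO-based covariance parameter selection on synthetic 1-D data: it uses the closed-form leave-one-out residuals e_i = (C^-1 y)_i / (C^-1)_ii and one common form of the third-order Gauss-Markov covariance. The GM3 expression, grid values, and data are assumptions for illustration, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def gm3(r, c0, L):
        """Third-order Gauss-Markov covariance (one common form; assumed)."""
        a = r / L
        return c0 * (1.0 + a + a**2 / 3.0) * np.exp(-a)

    # Synthetic 1-D "gravity anomaly" profile: correlated signal + noise.
    x = np.sort(rng.uniform(0.0, 100.0, 120))
    r = np.abs(x[:, None] - x[None, :])
    y = np.linalg.cholesky(gm3(r, 25.0, 8.0) + 1e-9 * np.eye(x.size)) \
            @ rng.standard_normal(x.size) + 1.5 * rng.standard_normal(x.size)

    def loo_rmse(c0, L, noise_sd):
        # Full covariance = signal covariance + a priori noise variance.
        C = gm3(r, c0, L) + noise_sd**2 * np.eye(x.size)
        Ci = np.linalg.inv(C)
        # Closed-form LOO residuals for best linear prediction:
        # e_i = (C^-1 y)_i / (C^-1)_ii
        e = (Ci @ y) / np.diag(Ci)
        return np.sqrt(np.mean(e**2))

    # Grid search over the two GM3 parameters and the a priori noise sigma.
    grid = [(c0, L, ns) for c0 in (10.0, 25.0, 50.0)
            for L in (4.0, 8.0, 16.0) for ns in (0.5, 1.5, 3.0)]
    best = min(grid, key=lambda p: loo_rmse(*p))
    print("LOO-selected (C0, L, noise_sd):", best)
    ```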

  3. Denoising spectroscopic data by means of the improved least-squares deconvolution method

    NASA Astrophysics Data System (ADS)

    Tkachenko, A.; Van Reeth, T.; Tsymbal, V.; Aerts, C.; Kochukhov, O.; Debosscher, J.

    2013-12-01

    Context. The MOST, CoRoT, and Kepler space missions have led to the discovery of a large number of intriguing, and in some cases unique, objects, among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions have delivered photometric data of unprecedented quality, these data lack any spectral information, and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Aims: The faintness of most of the observed stars and the required high signal-to-noise ratio (S/N) of spectroscopic data both imply the need to use large telescopes, access to which is limited. In this paper, we look for an alternative and aim to develop a technique that allows the denoising of originally low S/N (typically below 80) spectroscopic data, making observations of faint targets with small telescopes possible and effective. Methods: We present a generalization of the original least-squares deconvolution (LSD) method by implementing a multicomponent average profile and a line-strengths correction algorithm. We tested the method on simulated and real spectra of single and binary stars, among which are two intrinsically variable objects. Results: The method was successfully tested on the high-resolution spectra of Vega and a Kepler star, KIC 04749989. Application to the two pulsating stars, 20 CVn and HD 189631, showed that the technique is also applicable to intrinsically variable stars: the results of frequency analysis and mode identification from the LSD model spectra for both objects are in good agreement with the findings from the literature. Depending on the S/N of the original data and the spectral characteristics of a star, the gain in S/N in the LSD model spectrum typically ranges from 5 to 15 times. Conclusions: The technique introduced in this paper allows an effective denoising of originally low S/N spectroscopic data. The high S/N spectra obtained…

  4. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    ERIC Educational Resources Information Center

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
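
    A minimal sketch of the GLS idea on a synthetic single-case series with AR(1) errors; here the autocorrelation parameter is assumed known, whereas the proposed method estimates it from the data, and all values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy single-case time series: baseline phase then intervention phase,
    # with AR(1) errors.
    n, rho, effect = 30, 0.4, 2.0
    phase = (np.arange(n) >= 15).astype(float)  # 0 = baseline, 1 = treatment
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.standard_normal()
    y = 3.0 + effect * phase + e

    X = np.column_stack([np.ones(n), phase])
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

    # GLS estimate: beta = (X' S^-1 X)^-1 X' S^-1 y, which accounts for
    # the autocorrelation when estimating the level change.
    Si = np.linalg.inv(Sigma)
    beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
    print(f"estimated level change (effect size numerator): {beta[1]:.2f}")
    ```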

  5. Theoretic Fit and Empirical Fit: The Performance of Maximum Likelihood versus Generalized Least Squares Estimation in Structural Equation Models.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Troye, Sigurd Villads; Howell, Roy D.

    1999-01-01

    Used simulation to compare the ability of maximum likelihood (ML) and generalized least-squares (GLS) estimation to provide theoretic fit in models that are parsimonious representations of a true model. The better empirical fit obtained for GLS, compared with ML, was obtained at the cost of lower theoretic fit. (Author/SLD)

  6. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    EPA Science Inventory

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...

  7. A Comparison of Approaches for the Analysis of Interaction Effects between Latent Variables Using Partial Least Squares Path Modeling

    ERIC Educational Resources Information Center

    Henseler, Jorg; Chin, Wynne W.

    2010-01-01

    In social and business sciences, the importance of the analysis of interaction effects between manifest as well as latent variables steadily increases. Researchers using partial least squares (PLS) to analyze interaction effects between latent variables need an overview of the available approaches as well as their suitability. This article…

  8. Using the Criterion-Predictor Factor Model to Compute the Probability of Detecting Prediction Bias with Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2012-01-01

    The study of prediction bias is important and the last five decades include research studies that examined whether test scores differentially predict academic or employment performance. Previous studies used ordinary least squares (OLS) to assess whether groups differ in intercepts and slopes. This study shows that OLS yields inaccurate inferences…

  9. Alternating Least Squares Algorithms for Simultaneous Components Analysis with Equal Component Weight Matrices in Two or More Populations.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; ten Berge, Jos M. F.

    1989-01-01

    Two alternating least squares algorithms are presented for the simultaneous components analysis method of R. E. Millsap and W. Meredith (1988). These methods, one for small data sets and one for large data sets, can indicate whether or not a global optimum for the problem has been attained. (SLD)

  10. Comparing Least Squares and Robust Methods in Linear Regression Analysis of the Discharge of the Flathead River, Northwestern Montana.

    NASA Astrophysics Data System (ADS)

    Bell, A. L.; Moore, J. N.; Greenwood, M. C.

    2007-12-01

    The Flathead River in Northwestern Montana drains the relatively pristine, high-mountain watersheds of Glacier-Waterton national parks and large wilderness areas making it an excellent test-bed for hydrologic response to climate change. Flows in the North Fork and Middle Fork of the Flathead River are relatively unmodified by humans, whereas the South Fork has a large hydroelectric reservoir (Hungry Horse) in the lower end of the basin. USGS stream gage data for the North, Middle and South forks from 1940 to 2006 were analyzed for significant trends in the timing of quantiles of flow to examine climate forcing vs. direct modification of flow from the dam. The trends in timing were analyzed for climate change influences using the PRISM model output for 1940 to 2006 for the respective basin. The analysis of trends in timing employed two linear regression methods, typical least squares estimation and robust estimation using weighted least squares. Least squares estimation is the standard method employed when performing regression analysis. The power of this method is sensitive to the violation of the assumptions of normally distributed errors with constant variance (homoscedasticity). Considering that violations of these assumptions are common in hydrologic data, robust estimation was used to preserve the desired statistical power because it is not significantly affected by non-normality or heteroscedasticity. Least squares estimated trends that were found to be significant, using a 10% significance level, were typically not significant using a robust estimation method. This could have implications for interpreting the meaning of significant trends found using the least squares estimator. Utilizing robust estimation methods for analyzing hydrologic data may allow investigators to more accurately summarize any trends.
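
    The contrast described above can be sketched as follows, with scikit-learn's HuberRegressor standing in for the weighted-least-squares robust estimator and a synthetic timing record containing a few outlier years; data and settings are illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import HuberRegressor, LinearRegression

    rng = np.random.default_rng(5)

    # Toy record: day-of-year of a flow quantile vs. year, with a weak
    # trend and a few heavy-tailed outliers (non-normal errors).
    years = np.arange(1940, 2007)
    doy = 150 - 0.10 * (years - 1940) + 4.0 * rng.standard_normal(years.size)
    doy[::15] += rng.choice([-25.0, 25.0], size=doy[::15].size)  # outliers

    X = (years - years.mean()).reshape(-1, 1)
    ols = LinearRegression().fit(X, doy)
    rob = HuberRegressor().fit(X, doy)   # IRLS-style robust stand-in

    print(f"OLS trend:    {ols.coef_[0]:.3f} days/yr")
    print(f"robust trend: {rob.coef_[0]:.3f} days/yr")
    ```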

  11. A feasibility study of a 3-D finite element solution scheme for aeroengine duct acoustics

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. L.

    1980-01-01

    The advantage of developing a 3-D model of aeroengine duct acoustics is the ability to analyze axial and circumferential liner segmentation simultaneously. The feasibility of a 3-D duct acoustics model was investigated using Galerkin or least squares element formulations combined with Gaussian elimination, successive over-relaxation, or conjugate gradient solution algorithms on conventional scalar computers and on a vector machine. A least squares element formulation combined with a conjugate gradient solver on a CDC Star vector computer initially appeared to hold great promise, but severe difficulties were encountered with matrix ill-conditioning. This ill-conditioning rendered the technique impractical for realistic problems.

  12. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    NASA Astrophysics Data System (ADS)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
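
    A toy version of the degree-selection step: polynomials of increasing degree are fitted to synthetic absorbance-versus-concentration data and compared via AIC and BIC computed from the residual sum of squares. The data, noise level, and Gaussian-error AIC/BIC forms are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy data at one wavenumber: absorbance vs. monomer concentration,
    # with monomer (linear) and dimer (quadratic) contributions + noise.
    c = np.linspace(0.1, 1.0, 25)
    A = 0.8 * c + 0.3 * c**2 + 0.01 * rng.standard_normal(c.size)

    def info_criteria(rss, n, k):
        aic = n * np.log(rss / n) + 2 * k
        bic = n * np.log(rss / n) + k * np.log(n)
        return aic, bic

    # Systematically increase the polynomial degree and watch the
    # criteria: each extra term must earn its keep, which guards against
    # overfitting (and the compensation effects the authors describe).
    for deg in range(1, 6):
        coef = np.polyfit(c, A, deg)
        rss = float(np.sum((A - np.polyval(coef, c)) ** 2))
        aic, bic = info_criteria(rss, c.size, deg + 1)
        print(f"degree {deg}: AIC={aic:8.1f}  BIC={bic:8.1f}")
    ```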

  13. Penalized Multi-Way Partial Least Squares for Smooth Trajectory Decoding from Electrocorticographic (ECoG) Recording

    PubMed Central

    Eliseyev, Andrey; Aksenova, Tetiana

    2016-01-01

    In the current paper the decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for the smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combined the prediction accuracy of the algorithms of the PLS family and trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417

  14. Least Square Fast Learning Network for modeling the combustion efficiency of a 300WM coal-fired boiler.

    PubMed

    Li, Guoqiang; Niu, Peifeng; Wang, Huaibao; Liu, Yongchao

    2014-03-01

    This paper presents a novel artificial neural network with a very fast learning speed, all of whose weights and biases are determined by applying the least squares method twice; it is therefore called the Least Square Fast Learning Network (LSFLN). A further difference from conventional neural networks is that the output neurons of the LSFLN receive not only the information from the hidden-layer neurons but also the external information directly from the input neurons. To test the validity of the LSFLN, it is applied to six classical regression applications and is also employed to build the functional relation between the combustion efficiency and the operating parameters of a 300 MW coal-fired boiler. Experimental results show that, compared with other methods, the LSFLN achieves much better regression precision and generalization ability with far fewer hidden neurons and at a much faster learning speed. PMID:24373896
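
    A rough single-solve sketch of the architecture described above: a fixed random hidden layer whose activations are concatenated with the raw inputs, and output weights obtained by one linear least-squares solve. The exact "twice least squares" training of the LSFLN is not reproduced here; the data and sizes are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy regression task standing in for the boiler-efficiency data.
    X = rng.uniform(-1.0, 1.0, (200, 4))
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(200)

    # Random (fixed) hidden layer, as in fast-learning networks.
    n_hidden = 20
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)

    # Key LSFLN trait: the outputs see both the hidden activations AND
    # the raw inputs, so the design matrix concatenates them; the output
    # weights are then one linear least-squares solve, not gradient descent.
    D = np.hstack([H, X, np.ones((X.shape[0], 1))])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)

    rmse = np.sqrt(np.mean((D @ beta - y) ** 2))
    print(f"training RMSE: {rmse:.4f}")
    ```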

  16. A distribution-free alternative to least-squares regression and its application to Rb/Sr isochron calculations

    USGS Publications Warehouse

    Vugrinovich, R.G.

    1981-01-01

    A distribution-free estimator of the slope of a regression line is introduced. This estimator is designated Sm and is given by the median of the set of n(n - 1)/2 slope estimators, which may be calculated by inserting pairs of points (Xi, Yi) and (Xj, Yj) into the slope formula Sij = (Yi - Yj)/(Xi - Xj), 1 ≤ i < j ≤ n. Outliers are identified as points whose residuals Ri about the fitted line satisfy |Ri - Rm| > k (median {|Ri - Rm|}), where Rm is the median residual. If no outliers are found, the Y-intercept is given by Rm. Confidence limits on Rm and Sm can be found from the sets of Ri and Si, respectively. The distribution-free estimators are compared with the least-squares estimators now in use by utilizing published data. Differences between the least-squares and distribution-free estimates are discussed, as are the drawbacks of the distribution-free techniques. © 1981 Plenum Publishing Corporation.
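
    The estimator is simple enough to state directly in code; the sketch below computes Sm as the median of all pairwise slopes and Rm as the median residual, on synthetic data with one gross outlier. Data and values are illustrative.

    ```python
    import numpy as np

    def median_slope_fit(x, y):
        """Distribution-free fit: Sm = median of the n(n-1)/2 pairwise
        slopes, Rm = median of the residuals y - Sm*x (as in the abstract)."""
        n = x.size
        slopes = [(y[i] - y[j]) / (x[i] - x[j])
                  for i in range(n) for j in range(i + 1, n)
                  if x[i] != x[j]]
        Sm = np.median(slopes)
        Rm = np.median(y - Sm * x)
        return Sm, Rm

    rng = np.random.default_rng(8)
    x = np.linspace(0.0, 10.0, 20)
    y = 0.7 * x + 1.0 + 0.1 * rng.standard_normal(x.size)
    y[3] += 5.0                     # one gross outlier barely moves Sm

    Sm, Rm = median_slope_fit(x, y)
    print(f"slope Sm = {Sm:.3f}, intercept Rm = {Rm:.3f}")
    ```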

  17. Bounds on least-squares four-parameter sine-fit errors due to harmonic distortion and noise

    SciTech Connect

    Deyst, J.P.; Souders, T.M.; Solomon, O.M.

    1994-03-01

    Least-squares sine-fit algorithms are used extensively in signal processing applications. The parameter estimates produced by such algorithms are subject to both random and systematic errors when the record of input samples consists of a fundamental sine wave corrupted by harmonic distortion or noise. The errors occur because, in general, such sine-fits will incorporate a portion of the harmonic distortion or noise into their estimate of the fundamental. Bounds are developed for these errors for least-squares four-parameter (amplitude, frequency, phase, and offset) sine-fit algorithms. The errors are functions of the number of periods in the record, the number of samples in the record, the harmonic order, and fundamental and harmonic amplitudes and phases. The bounds do not apply to cases in which harmonic components become aliased.
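
    A minimal sketch of a four-parameter sine fit on a synthetic record containing a third harmonic, using scipy's curve_fit as a generic nonlinear least-squares solver; the signal parameters are illustrative, and the printed amplitude error shows the kind of bias the bounds describe.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(9)

    # Record: fundamental sine wave plus a 3rd harmonic and noise. The
    # fit absorbs part of the harmonic, biasing the fundamental estimate.
    fs, n, f0 = 1000.0, 512, 17.3
    t = np.arange(n) / fs
    y = (np.sin(2 * np.pi * f0 * t + 0.4)
         + 0.2
         + 0.05 * np.sin(2 * np.pi * 3 * f0 * t + 1.1)
         + 0.01 * rng.standard_normal(n))

    def model(t, A, f, phi, C):
        return A * np.sin(2 * np.pi * f * t + phi) + C

    # Four-parameter least-squares sine fit: frequency is also estimated,
    # which makes the problem nonlinear, so a good initial guess matters.
    p, _ = curve_fit(model, t, y, p0=[1.0, 17.0, 0.0, 0.0])
    print(f"A={p[0]:.4f}  f={p[1]:.4f} Hz  phi={p[2]:.4f}  C={p[3]:.4f}")
    print(f"amplitude error vs. true 1.0: {abs(p[0] - 1.0):.2e}")
    ```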

  18. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.

    PubMed

    Li, Yankun; Shao, Xueguang; Cai, Wensheng

    2007-04-15

    Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra is proposed. In the proposed approach, the NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; the consensus LS-SVR technique was then used to build the calibration model. With an optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods. PMID:19071605
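
    A sketch of the consensus idea with scikit-learn's KernelRidge standing in for LS-SVR (the two are closely related least-squares kernel methods): members are trained on random subsamples of synthetic "spectra" and their predictions averaged. The DWT preprocessing step is omitted and all names and settings are illustrative.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(10)

    # Toy calibration task standing in for NIR spectra -> reducing sugar.
    X = rng.standard_normal((150, 60))           # preprocessed spectra
    y = X[:, :4] @ np.array([0.5, -0.3, 0.2, 0.1]) \
        + 0.05 * rng.standard_normal(150)

    # Consensus model: several kernel least-squares members trained on
    # random subsamples; averaging damps single-model instability.
    members = []
    for _ in range(10):
        idx = rng.choice(len(y), size=100, replace=False)
        m = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.05)
        members.append(m.fit(X[idx], y[idx]))

    X_new = rng.standard_normal((5, 60))
    consensus = np.mean([m.predict(X_new) for m in members], axis=0)
    print("consensus predictions:", np.round(consensus, 3))
    ```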

  19. Linear regression models, least-squares problems, normal equations, and stopping criteria for the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Arioli, M.; Gratton, S.

    2012-11-01

    Minimum-variance unbiased estimates for linear regression models can be obtained by solving least-squares problems. The conjugate gradient method can be successfully used to solve the symmetric and positive definite normal equations obtained from these least-squares problems. Taking into account the results of Golub and Meurant (1997, 2009) [10,11], Hestenes and Stiefel (1952) [17], and Strakoš and Tichý (2002) [16], which make it possible to approximate the energy norm of the error during the conjugate gradient iterative process, we adapt the stopping criterion introduced by Arioli (2005) [18] to the normal equations, taking into account the statistical properties of the underlying linear regression problem. Moreover, we show how the energy norm of the error is linked to the χ2-distribution and to the Fisher-Snedecor distribution. Finally, we present the results of several numerical tests that experimentally validate the effectiveness of our stopping criteria.
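
    A minimal sketch of conjugate gradients applied to the normal equations A^T A x = A^T b; a plain relative-residual test stands in for the statistically motivated, energy-norm-based stopping criteria developed in the paper, and the problem data are synthetic.

    ```python
    import numpy as np

    def cg_normal_equations(A, b, tol=1e-10, maxiter=500):
        """CG on the SPD normal equations A^T A x = A^T b. A simple
        residual test replaces the paper's statistical stopping rule."""
        x = np.zeros(A.shape[1])
        r = A.T @ b                  # residual of the normal equations
        p = r.copy()
        rs = r @ r
        for k in range(maxiter):
            Ap = A.T @ (A @ p)       # apply A^T A without forming it
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol * np.linalg.norm(A.T @ b):
                return x, k + 1
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x, maxiter

    rng = np.random.default_rng(11)
    A = rng.standard_normal((100, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(100)

    x, iters = cg_normal_equations(A, b)
    print(f"iterations: {iters}, error: {np.linalg.norm(x - x_true):.2e}")
    ```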

  20. High-order ENO schemes for unstructured meshes based on least-squares reconstruction

    SciTech Connect

    Ollivier-Gooch, C.F.

    1997-03-01

    High-order accurate schemes for conservation laws on unstructured meshes are not nearly so well advanced as such schemes for structured meshes. Consequently, little or nothing is known about the possible practical advantages of high-order discretization on unstructured meshes. This article is part of an ongoing effort to develop high-order schemes for unstructured meshes to the point where meaningful information can be obtained about the trade-offs involved in using spatial discretizations of higher than second-order accuracy on unstructured meshes. This article describes a high-order accurate ENO reconstruction scheme, called DD-L2-ENO, for use with vertex-centered upwind flow solution algorithms on unstructured meshes. The solution of conservation equations in this context can be broken naturally into three phases: (1) solution reconstruction, in which a polynomial approximation of the solution is obtained in each control volume; (2) flux integration around each control volume, using an appropriate flux function and a quadrature rule with accuracy commensurate with that of the reconstruction; and (3) time evolution, which may be implicit, explicit, multigrid, or some hybrid.
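
    The least-squares reconstruction at the heart of such schemes can be sketched in a few lines: given values at a control volume and its neighbors, the gradient is the least-squares solution of the stencil's linear system. The geometry below is synthetic, and a linear test field is used so the fit recovers the gradient exactly.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Least-squares linear reconstruction in one control volume: given the
    # value u0 at centroid c0 and values u_k at neighboring centroids, fit
    # grad(u) so that u0 + grad . (c_k - c0) matches u_k in the
    # least-squares sense (the building block of higher-order schemes).
    c0 = np.array([0.0, 0.0])
    neighbors = rng.uniform(-1.0, 1.0, (6, 2))   # neighbor centroids
    u = lambda p: 1.0 + 2.0 * p[..., 0] - 3.0 * p[..., 1]  # linear field
    u0, uk = u(c0), u(neighbors)

    dX = neighbors - c0                          # centroid offsets
    grad, *_ = np.linalg.lstsq(dX, uk - u0, rcond=None)
    print("recovered gradient:", np.round(grad, 6))   # expect [2, -3]
    ```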