Science.gov

Sample records for Galerkin least-squares solutions

  1. Meshless Galerkin least-squares method

    NASA Astrophysics Data System (ADS)

    Pan, X. F.; Zhang, X.; Lu, M. W.

    2005-02-01

    Collocation methods and Galerkin methods have been dominant among existing meshless methods. Galerkin-based meshless methods are computationally intensive, whereas collocation-based meshless methods suffer from instability. A new efficient meshless method, the meshless Galerkin least-squares method (MGLS), is proposed in this paper to combine the advantages of the Galerkin method and the collocation method. The problem domain is divided into two subdomains, an interior domain and a boundary domain. The Galerkin method is applied in the boundary domain, whereas the least-squares method is applied in the interior domain. The proposed scheme eliminates the spurious solutions that arise in the least-squares method when incorrect boundary conditions are used. To investigate the accuracy and efficiency of the proposed method, a cantilevered beam and an infinite plate with a central circular hole are analyzed in detail, and numerical results are compared with those obtained by the Galerkin-based meshless method (GBMM), the collocation-based meshless method (CBMM) and the meshless weighted least squares method (MWLS). Numerical studies show that the accuracy of the proposed MGLS is much higher than that of CBMM and is close to, even better than, that of GBMM, while its computational cost is much less than that of GBMM.

  2. A Galerkin least squares approach to viscoelastic flow.

    SciTech Connect

    Rao, Rekha R.; Schunk, Peter Randall

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. The method is tested on the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used to separate the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.

  3. Galerkin v. least-squares Petrov-Galerkin projection in nonlinear model reduction

    NASA Astrophysics Data System (ADS)

    Carlberg, Kevin; Barone, Matthew; Antil, Harbir

    2017-02-01

    Least-squares Petrov-Galerkin (LSPG) model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
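
    To make the distinction concrete, the following Python sketch (our illustration under simplifying assumptions, not the authors' GNAT implementation) contrasts the two projections for a steady linear full-order model: Galerkin enforces orthogonality of the residual to the reduced basis, while LSPG minimizes the residual norm over that basis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5                                     # full-order size, basis size
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in FOM
b = rng.standard_normal(n)
V = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal reduced basis

# Galerkin ROM: enforce V^T r(V x) = 0, i.e. solve (V^T A V) x = V^T b
x_gal = np.linalg.solve(V.T @ A @ V, V.T @ b)

# LSPG ROM: minimize ||r(V x)|| = ||A V x - b|| over the reduced coordinates
x_lspg, *_ = np.linalg.lstsq(A @ V, b, rcond=None)

for name, x in [("Galerkin", x_gal), ("LSPG", x_lspg)]:
    print(name, "residual norm:", np.linalg.norm(A @ V @ x - b))
```

    In this steady linear setting the LSPG residual can never exceed the Galerkin one, since LSPG minimizes it; the paper's contribution concerns the subtler time-discrete, nonlinear setting, where the comparison depends on the time integrator and the time step.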

  4. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  5. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  6. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    SciTech Connect

    Yoo, Jaechil

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν (Poisson's ratio) goes to 1/2. Computational experiments are included.

  7. A Galerkin least squares method for time harmonic Maxwell equations using Nédélec elements

    NASA Astrophysics Data System (ADS)

    Jagalur-Mohan, J.; Feijóo, G.; Oberai, A.

    2013-02-01

    A Galerkin least squares finite element method for the solution of the time-harmonic Maxwell’s equations using Nédélec elements is proposed. This method appends a least-squares term, evaluated within element interiors, to the standard Galerkin method. For the lowest-order hexahedral element, the numerical parameter multiplying this term is determined so as to optimize the dispersion properties of the resulting formulation. In particular, explicit expressions for this parameter are derived that lead to methods with no dispersion error for propagation along a specified direction and reduced dispersion error over all directions. It is noted that this method is easy to implement and does not add to the computational costs of the standard Galerkin method. The performance of the method is tested on problems of practical interest.

  8. Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent as to render it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, prompting the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
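
    The following Python sketch conveys the flavor of steps (i)-(iv); the residual-based index and the threshold contra_tol below are our illustrative stand-ins for the paper's actual inconsistency index, not its algorithm.

```python
import numpy as np

def prune_and_solve(A, b, contra_tol=1.0):
    """Drop equations flagged as strongly contradictory by a crude
    residual test, then return the minimum-norm least-squares solution
    of the reduced system (via the pseudoinverse)."""
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)     # trial fit on the full system
    r = np.abs(A @ x0 - b)                         # per-equation residual
    scale = np.linalg.norm(np.c_[A, b], axis=1)    # row scaling
    index = r / np.maximum(scale, 1e-15)           # crude "inconsistency index"
    keep = index < contra_tol                      # keep acceptably consistent rows
    x = np.linalg.pinv(A[keep]) @ b[keep]          # minimum-norm LS solution
    return x, keep
```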

  9. The covariance matrix for the solution vector of an equality-constrained least-squares problem

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1976-01-01

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
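
    As a worked illustration, the nullspace method gives both the constrained solution and its covariance in a few lines of Python. This is a textbook route with names of our choosing, not necessarily one of the specific algorithms from the book that the memo matches.

```python
import numpy as np
from scipy.linalg import null_space

def constrained_ls_cov(A, b, C, d, sigma2=1.0):
    """Solve min ||Ax - b|| subject to Cx = d by the nullspace method and
    return the solution covariance sigma2 * Z (Z^T A^T A Z)^-1 Z^T.
    Assumes C has full row rank, A restricted to null(C) has full column
    rank, and observation noise variance sigma2."""
    x_p = np.linalg.lstsq(C, d, rcond=None)[0]     # a particular solution of Cx = d
    Z = null_space(C)                              # orthonormal basis of null(C)
    y = np.linalg.lstsq(A @ Z, b - A @ x_p, rcond=None)[0]
    G = np.linalg.inv(Z.T @ (A.T @ A) @ Z)
    return x_p + Z @ y, sigma2 * Z @ G @ Z.T       # solution, covariance
```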

  10. Application of the Galerkin/least-squares formulation to the analysis of hypersonic flows. II - Flow past a double ellipse

    NASA Technical Reports Server (NTRS)

    Chalot, F.; Hughes, T. J. R.; Johan, Z.; Shakib, F.

    1991-01-01

    A finite element method for the compressible Navier-Stokes equations is introduced. The discretization is based on entropy variables. The methodology is developed within the framework of a Galerkin/least-squares formulation to which a discontinuity-capturing operator is added. Results for four test cases selected among those of the Workshop on Hypersonic Flows for Reentry Problems are presented.

  11. Least-squares finite element solution of 3D incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, Tsung-Liang; Povinelli, Louis A.

    1992-01-01

    Although significant progress has been made in the finite element solution of incompressible viscous flow problems, development of more efficient methods is still needed before large-scale computation of 3D problems becomes feasible. This paper presents such a development. The most popular finite element method for the solution of the incompressible Navier-Stokes equations is the classic Galerkin mixed method based on the velocity-pressure formulation. The mixed method requires the use of different elements to interpolate the velocity and the pressure in order to satisfy the Ladyzhenskaya-Babuska-Brezzi (LBB) condition for the existence of the solution. On the other hand, due to the lack of symmetry and positive definiteness of the linear equations arising from the mixed method, iterative methods for the solution of the linear systems have been hard to come by. Therefore, direct Gaussian elimination has been considered the only viable method for solving the systems. But, for three-dimensional problems, the computer resources required by a direct method become prohibitively large. In order to overcome these difficulties, a least-squares finite element method (LSFEM) has been developed. This method is based on the first-order velocity-pressure-vorticity formulation. In this paper the LSFEM is extended to the solution of the three-dimensional incompressible Navier-Stokes equations written in the following first-order quasi-linear velocity-pressure-vorticity formulation.
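
    The abstract stops short of displaying the system. A standard velocity-pressure-vorticity first-order form of the steady incompressible Navier-Stokes equations, reconstructed here for reference rather than quoted from the paper, is

```latex
\mathbf{u}\cdot\nabla\mathbf{u} + \nabla p + \nu\,\nabla\times\boldsymbol{\omega} = \mathbf{f},
\qquad
\boldsymbol{\omega} - \nabla\times\mathbf{u} = \mathbf{0},
\qquad
\nabla\cdot\mathbf{u} = 0,
```

    where ν is the kinematic viscosity; the identity ∇×∇×u = −Δu for divergence-free u is what lets the vorticity carry the second-order viscous term in first-order form.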

  12. Difficulty Factors, Distribution Effects, and the Least Squares Simplex Data Matrix Solution

    ERIC Educational Resources Information Center

    Ten Berge, Jos M. F.

    1972-01-01

    In the present article it is argued that the Least Squares Simplex Data Matrix Solution does not deal adequately with difficulty factors inasmuch as the theoretical foundation is insufficient. (Author/CB)

  13. Least-squares spectral element solution of incompressible Navier-Stokes equations with adaptive refinement

    NASA Astrophysics Data System (ADS)

    Ozcelikkale, Altug; Sert, Cuneyt

    2012-05-01

    Least-squares spectral element solutions of steady, two-dimensional, incompressible flows are obtained by approximating the velocity, pressure and vorticity variable set on Gauss-Lobatto-Legendre nodes. The Constrained Approximation Method is used for h- and p-type nonconforming interfaces of quadrilateral elements. Adaptive solutions are obtained using a posteriori error estimates based on the least-squares functional and spectral coefficients. Effective use of p-refinement to overcome the poor mass conservation drawback of the least-squares formulation, and successful combined use of h- and p-refinement to solve problems with geometric singularities, are demonstrated. Capabilities and limitations of the developed code are presented using Kovasznay flow, flow past a circular cylinder in a channel, and backward-facing step flow.

  14. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
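
    For orientation, the baseline that the combinatorial reorganization accelerates is a column-by-column solve. A Python sketch with nonnegativity as the representative inequality constraint follows (the function name is ours):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_columns(A, B):
    """Column-by-column NNLS baseline: solve min ||A x - b||, x >= 0, for
    every column b of B. The fast combinatorial algorithm gains speed by
    grouping right-hand sides that share an active set so their
    unconstrained subproblems reuse one factorization; that bookkeeping
    is omitted here."""
    X = np.empty((A.shape[1], B.shape[1]))
    for j in range(B.shape[1]):
        X[:, j], _ = nnls(A, B[:, j])
    return X
```

    The patented algorithm's advantage grows with the number of observation vectors, precisely because many columns of B tend to end up with identical active sets.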

  15. Application of the Galerkin/least-squares formulation to the analysis of hypersonic flows. I - Flow over a two-dimensional ramp

    NASA Technical Reports Server (NTRS)

    Chalot, F.; Hughes, T. J. R.; Johan, Z.; Shakib, F.

    1991-01-01

    An FEM for the compressible Navier-Stokes equations is introduced. The discretization is based on entropy variables. The methodology is developed within the framework of a Galerkin/least-squares formulation to which a discontinuity-capturing operator is added. Results for three test cases selected among those of the Workshop on Hypersonic Flows for Reentry Problems are presented.

  16. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    NASA Technical Reports Server (NTRS)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent as to render it unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, prompting the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.

  17. A new family of stable elements for the Stokes problem based on a mixed Galerkin/least-squares finite element formulation

    NASA Technical Reports Server (NTRS)

    Franca, Leopoldo P.; Loula, Abimael F. D.; Hughes, Thomas J. R.; Miranda, Isidoro

    1989-01-01

    Adding a residual form of the equilibrium equation to the classical Hellinger-Reissner formulation, a new Galerkin/least-squares finite element method is derived. It fits within the framework of a mixed finite element method and is stable for rather general combinations of stress and velocity interpolations, including equal-order discontinuous stress and continuous velocity interpolations, which are unstable within the Galerkin approach. Error estimates are presented based on a generalization of the Babuska-Brezzi theory. Numerical results (not presented herein) have confirmed these estimates as well as the good accuracy and stability of the method.

  18. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    PubMed

    Huang, Sheng-Juan; Yang, Guang-Hong

    2016-07-18

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.

  19. Least-Squares Spectral Method for the solution of a fractional advection-dispersion equation

    NASA Astrophysics Data System (ADS)

    Carella, Alfredo Raúl; Dorao, Carlos Alberto

    2013-01-01

    Fractional derivatives provide a general approach for modeling transport phenomena occurring in diverse fields. This article describes a Least Squares Spectral Method for solving advection-dispersion equations using Caputo or Riemann-Liouville fractional derivatives. A Gauss-Lobatto-Jacobi quadrature is implemented to approximate the singularities in the integrands arising from the fractional derivative definition. Exponential convergence rate of the operator is verified when increasing the order of the approximation. Solutions are calculated for fractional-time and fractional-space differential equations. Comparisons with finite difference schemes are included. A significant reduction in storage space is achieved by lowering the resolution requirements in the time coordinate.
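
    For reference (a standard definition, not quoted from the article), the Caputo fractional derivative of order α, with n − 1 < α < n, is

```latex
{}^{C}\!D_{t}^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau,
```

    whose integrand has an integrable endpoint singularity at τ = t; this is the kind of singularity the Gauss-Lobatto-Jacobi quadrature mentioned above is designed to absorb into its weight function.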

  20. Least-squares finite element solutions for three-dimensional backward-facing step flow

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang

    1993-01-01

    Comprehensive numerical solutions of steady-state incompressible viscous flow over a three-dimensional backward-facing step up to Re = 800 are presented. The results are obtained by the least-squares finite element method (LSFEM), which is based on the velocity-pressure-vorticity formulation. The computed model is of the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds number. The calculated values of the primary reattachment length are in good agreement with experimental results.

  21. A new finite element formulation for computational fluid dynamics. IX - Fourier analysis of space-time Galerkin/least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Shakib, Farzin; Hughes, Thomas J. R.

    1991-01-01

    A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.

  22. Baseline configuration for GNSS attitude determination with an analytical least-squares solution

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2016-12-01

    GNSS attitude determination using carrier-phase measurements from four antennas is studied, on the condition that the integer ambiguities have already been resolved. The solution to the nonlinear least-squares problem is usually obtained iteratively; however, an analytical solution exists for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single- and double-difference measurements are treated, which refer to dedicated and non-dedicated receivers respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is given together with its error variance-covariance matrix.

  23. Confidence Region of Least Squares Solution for Single-Arc Observations

    NASA Astrophysics Data System (ADS)

    Principe, G.; Armellin, R.; Lewis, H.

    2016-09-01

    The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints it is likely that long gaps between observations will occur for small objects. This makes it necessary to determine the space object (SO) orbit and to accurately describe the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method, taking advantage of the high-order Taylor expansions enabled by differential algebra. In particular, the high-order expansion of the residuals with respect to the state is used to implement an arbitrary-order least squares solver, avoiding the typical approximations of differential-correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.

  24. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    NASA Technical Reports Server (NTRS)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal, from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities, depending on the choice of covariance types. Selected for this study were 30′ x 30′ mean gravity and height anomalies. Existing and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30′ x 30′ mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.
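
    The collocation estimate referred to here takes the standard textbook form (least-squares collocation in the sense of Moritz; the abstract itself does not display it):

```latex
\hat{s} = C_{st}\,\left(C_{tt} + D\right)^{-1} t,
```

    with t the vector of centered gradient observations, C_tt their signal covariance, D the noise covariance, and C_st the cross-covariance between the sought quantity (here a mean gravity or height anomaly) and the observations.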

  25. Non-oscillatory and non-diffusive solution of convection problems by the iteratively reweighted least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1993-01-01

    A comparative description is presented for the least-squares FEM (LSFEM) for 2D steady-state pure convection problems. In addition to exhibiting better control of the streamline derivative than the streamline upwinding Petrov-Galerkin method, numerical convergence rates are obtained which show the LSFEM to be virtually optimal. The LSFEM is used as a framework for an iteratively reweighted LSFEM yielding nonoscillatory and nondiffusive solutions for problems with contact discontinuities; this method is shown to convect contact discontinuities without error when using triangular and bilinear elements.

  26. Phase-space finite elements in a least-squares solution of the transport equation

    SciTech Connect

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  27. A geometric buildup algorithm for the solution of the distance geometry problem using least-squares approximation.

    PubMed

    Sit, Atilla; Wu, Zhijun; Yuan, Yaxiang

    2009-11-01

    We propose a new geometric buildup algorithm for the solution of the distance geometry problem in protein modeling, which successfully prevents the accumulation of rounding errors in the buildup calculations and also tolerates small errors in given distances. In this algorithm, we use all, instead of a subset, of the available distances for the determination of each unknown atom and obtain the position of the atom by a least-squares approximation instead of an exact solution to the system of distance equations. We show that the least-squares approximation can be obtained by using a special singular value decomposition method, which not only tolerates and minimizes small distance errors, but also prevents the rounding errors from propagating, especially when the distance data is sparse. We describe the least-squares formulations and their solution methods, and present test results from applying the new algorithm to the determination of a set of protein structures with varying degrees of availability and accuracy of the distances. We show that the new development of the algorithm increases the modeling ability and improves the stability and robustness of the geometric buildup approach significantly, from both theoretical and practical points of view.
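
    A stripped-down Python sketch of the core placement step follows. It is our simplification: it linearizes by differencing sphere equations and calls an SVD-based least-squares solver, whereas the paper works with the full distance system and a special SVD.

```python
import numpy as np

def place_atom(P, d):
    """Position a new atom from known atom positions P (k x 3) and
    distances d (k,). Subtracting the first sphere equation
    |x - p_i|^2 = d_i^2 from the others linearizes the system;
    lstsq (SVD-based) returns the least-squares position, tolerating
    small errors in the given distances."""
    A = 2.0 * (P[1:] - P[0])
    b = (np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```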

  28. Least-squares solution of incompressible Navier-Stokes equations with the p-version of finite elements

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Sonnad, Vijay

    1991-01-01

    A p-version of the least squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady state incompressible viscous flow problems. The resulting system of symmetric and positive definite linear equations can be solved satisfactorily with the conjugate gradient method. In conjunction with the use of rapid operator application, which avoids the formation of either element or global matrices, it is possible to achieve a highly compact and efficient solution scheme for the incompressible Navier-Stokes equations. Numerical results are presented for two-dimensional flow over a backward-facing step. The effectiveness of simple outflow boundary conditions is also demonstrated.
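
    The matrix-free flavor of such a scheme can be imitated with SciPy's LinearOperator; in the hedged sketch below the diagonal operator is merely a stand-in for the element-by-element application of the least-squares operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 10_000
d = 2.0 + np.arange(n) / n          # stand-in SPD operator (diagonal here); a real
                                    # p-version LSFEM applies its operator element
                                    # by element without assembling any matrix
A = LinearOperator((n, n), matvec=lambda v: d * v, dtype=float)
b = np.ones(n)

x, info = cg(A, b)                  # conjugate gradients; no matrix is ever formed
print(info, np.linalg.norm(d * x - b))
```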

  29. Divide-and-Conquer Solutions of Least-Squares Problems for Matrices with Displacement Structure

    DTIC Science & Technology

    1991-01-01

    matrices. Let F_f and F_b be nilpotent matrices. The matrix ∇(F_f,F_b)(A) = A - F_f A F_b^T is called the displacement of A with respect to the displacement ... displacement representations. Lemma: Let E be an m x n matrix. If F_f and F_b are nilpotent, then the equation ∇(F_f,F_b)(E) = Σ_j x_j y_j^T has the unique solution E = ... Toeplitz matrix, and the use of divide-and-conquer (or doubling) techniques for computing (generators of) the Gohberg-Semencul formula. Let x and y denote

  30. Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Lin, Wen H.; Chan, Daniel C.

    1997-01-01

    This paper presents computed results for some of the CAA benchmark problems via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations are considered benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of solving the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of this solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1, the propagation of a linear wave, and the first problem of Category 4, an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim in including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions and the comparisons are excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems, and makes the method quite attractive for developing a computational acoustics solver for calculating aero/hydrodynamic noise in a violent flow environment.

  31. Characteristic-Galerkin and Galerkin/Least-Squares Space-Time Formulations for the Advection-Diffusion Equation with Time-Dependent Domains

    DTIC Science & Technology

    1992-01-01

    formulations for the advection-diffusion equation with time-dependent domains. O. Pironneau, Université Paris 6, Analyse Numérique, T 55-65/5, 4 place ... solution of the advection-dissipation equation. Correspondence to: Dr. O. Pironneau, Université Paris 6, Analyse Numérique, T 55-65/5, 4 place ... for Partial Differential Equations (Cambridge Univ. Press, Cambridge, 1989). [12] T.E. Tezduyar, M. Behr and J. Liou, A new strategy for finite

  32. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    ... efficient parallel code for applying the operator. Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. We show ... apply the underlying operator. Such a preconditioner can be very attractive in scenarios where one has a highly efficient parallel code for applying ... repeatedly solve a large system of linear equations where one has an extremely fast parallel code for applying an underlying fixed linear operator

  33. Bayesian least squares deconvolution

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
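
    For context, the classical LSD point estimate that the Bayesian version generalizes is a plain weighted least-squares solve. A compact numpy rendering (M is the line-mask matrix, V the observed Stokes spectrum, sigma the per-pixel noise; all names illustrative):

```python
import numpy as np

def lsd_profile(M, V, sigma):
    """Classical (non-Bayesian) LSD as weighted least squares:
    V ~ M Z with per-pixel noise sigma. Returns the mean profile Z and
    its formal covariance. The paper replaces this point estimate with
    a Gaussian-process prior on Z."""
    W = 1.0 / sigma ** 2
    A = (M.T * W) @ M                  # M^T diag(W) M
    rhs = (M.T * W) @ V                # M^T diag(W) V
    cov = np.linalg.inv(A)
    return cov @ rhs, cov
```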

  34. Multilevel first-order system least squares for PDEs

    SciTech Connect

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline-diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower-order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  35. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial that least-squares fits uniformly spaced data. The program allows the user either to specify the tolerable least-squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least-squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least-squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least-squares fitting error is printed to the screen. In general, the program can produce a fit with up to a 100-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
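
    The degree-escalation loop is easy to reproduce with modern tools. In the hedged numpy equivalent below, numpy's internally scaled polynomial basis stands in for the orthogonal-factorial-polynomial step, and the RMS error stands in for AKLSQF's error measure:

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_deg=100):
    """Raise the polynomial degree until the least-squares fit error
    meets tol, echoing AKLSQF's tolerance mode."""
    for deg in range(1, max_deg + 1):
        p = np.polynomial.Polynomial.fit(x, y, deg)   # stable scaled basis
        err = np.sqrt(np.mean((p(x) - y) ** 2))       # RMS fit error
        if err <= tol:
            return p, deg, err
    return p, max_deg, err
```

    Calling fit_to_tolerance(x, y, tol=1e-3) mirrors the tolerance mode; fitting at a fixed degree directly mirrors the degree mode.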

  36. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    ERIC Educational Resources Information Center

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  37. On the Numerical Solution of the Elliptic Monge-Ampère Equation in Dimension Two: A Least-Squares Approach

    NASA Astrophysics Data System (ADS)

    Dean, Edward J.; Glowinski, Roland

    During his outstanding career, Olivier Pironneau has addressed the solution of a large variety of problems from the natural sciences, engineering and finance, to name a few, evidence of this activity being the many articles and books he has written. It is the opinion of these authors, and former collaborators of O. Pironneau (cf. [DGP91]), that this chapter is well-suited to a volume honoring him. Indeed, the two pillars of the solution methodology that we are going to describe are: (1) a nonlinear least-squares formulation in an appropriate Hilbert space, and (2) a mixed finite element approximation, reminiscent of the one used in [DGP91] and [GP79] for solving the Stokes and Navier-Stokes equations in their stream function-vorticity formulation; the contributions of O. Pironneau on the two above topics are well-known worldwide. Last but not least, we will show that the solution method discussed here can be viewed as a solution method for a non-standard variant of the incompressible Navier-Stokes equations, an area where O. Pironneau has many outstanding and celebrated contributions (cf. [Pir89], for example).

  38. Weighted total least squares formulated by standard least squares theory

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A.; Jazaeri, S.

    2012-01-01

    This contribution presents a simple, attractive, and flexible formulation for the weighted total least squares (WTLS) problem. It is simple because it is based on the well-known standard least squares theory; it is attractive because it allows one to directly use the existing body of knowledge of least squares theory; and it is flexible because it can be applied to a broad field of applications in errors-in-variables (EIV) models. Two empirical examples using real and simulated data are presented. The first example, a linear regression model, takes the covariance matrix of the coefficient matrix as QA = Qn ⊗ Qm, while the second example, a 2-D affine transformation, takes a general structure of the covariance matrix QA. Estimates of the unknown parameters, along with their standard deviations, are obtained for the two examples. The results are shown to be identical to those obtained with the nonlinear Gauss-Helmert model (GHM). We aim to have an impartial evaluation of WTLS and GHM. We further explore the high potential capability of the presented formulation. One can simply obtain the covariance matrix of the WTLS estimates. In addition, one can generalize the orthogonal projectors of standard least squares, from which estimates of the residuals and observations (along with their covariance matrices) and the variance of the unit weight can directly be derived. Also, the constrained WTLS, variance component estimation for an EIV model, and the theory of reliability and data snooping can easily be established; these are in progress for future publications.
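
    For contrast with the weighted formulation, the ordinary (unweighted) TLS solution has a one-line SVD form; this is the classical Golub-Van Loan result, not taken from the paper.

```python
import numpy as np

def tls(A, y):
    """Ordinary total least squares via SVD: perturb both A and y
    minimally so the system becomes consistent. Assumes the last
    component of the smallest right singular vector is nonzero."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.c_[A, y])
    v = Vt[-1]                       # right singular vector for smallest sigma
    return -v[:n] / v[n]
```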

  39. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  40. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second-order equation -∇·(∇u) + u = f is recast as a first-order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiments show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first-order system.

  41. Fast Algorithms for Structured Least Squares and Total Least Squares Problems

    PubMed Central

    Kalsi, Anoop; O’Leary, Dianne P.

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices. PMID:27274922

  42. Fast Algorithms for Structured Least Squares and Total Least Squares Problems.

    PubMed

    Kalsi, Anoop; O'Leary, Dianne P

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices.

  43. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, D.C.; Romero, L.A.

    1995-06-13

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals. 6 figs.

  44. 2-D weighted least-squares phase unwrapping

    DOEpatents

    Ghiglia, Dennis C.; Romero, Louis A.

    1995-01-01

    Weighted values of interferometric signals are unwrapped by determining the least squares solution of phase unwrapping for unweighted values of the interferometric signals; and then determining the least squares solution of phase unwrapping for weighted values of the interferometric signals by preconditioned conjugate gradient methods using the unweighted solutions as preconditioning values. An output is provided that is representative of the least squares solution of phase unwrapping for weighted values of the interferometric signals.
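
    The unweighted first stage has a well-known direct solution via cosine transforms, which solve the discrete Poisson equation that the least-squares condition produces. A Python sketch of that stage only, with the weighted preconditioned-conjugate-gradient iteration of the patent omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    """Wrap values into (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def unwrap_ls(psi):
    """Unweighted least-squares phase unwrapping: solve the discrete
    Poisson equation driven by wrapped phase differences, with Neumann
    boundary conditions, via the 2-D DCT."""
    M, N = psi.shape
    dy = np.zeros((M, N)); dy[:-1, :] = wrap(np.diff(psi, axis=0))
    dx = np.zeros((M, N)); dx[:, :-1] = wrap(np.diff(psi, axis=1))
    rho = dx + dy                          # divergence of the wrapped gradient
    rho[:, 1:] -= dx[:, :-1]
    rho[1:, :] -= dy[:-1, :]
    R = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                      # the mean phase is unconstrained...
    Phi = R / denom
    Phi[0, 0] = 0.0                        # ...so pin it to zero
    return idctn(Phi, norm='ortho')
```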

  45. Deming's General Least Square Fitting

    SciTech Connect

    Rinard, Phillip

    1992-02-18

    DEM4-26 is a generalized least-squares fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. If the plot is to be copied to a printer, GRAPHICS should be called from the operating system disk before the BASIC interpreter is loaded.
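
    The linear special case of Deming's method has a closed form, sketched below for concreteness; DEM4-26 itself handles its nonlinear built-in functions iteratively, and delta here is the assumed ratio of y-error variance to x-error variance.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Closed-form Deming straight-line fit (errors in both x and y).
    Assumes the sample covariance sxy is nonzero."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx          # slope, intercept
```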

  46. The moving-least-squares-particle hydrodynamics method (MLSPH)

    SciTech Connect

    Dilts, G.

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas, which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua, using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation, at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
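
    The MLS interpolant at the heart of the method is easy to state in one dimension. A toy numpy version follows (the weight function and all names are illustrative, and enough particles must fall inside the support radius h for the moment matrix to be invertible; the paper's setting is of course multi-dimensional hydrodynamics):

```python
import numpy as np

def mls_eval(x, xi, ui, h, m=2):
    """Moving-least-squares value at x from particle data (xi, ui):
    fit a degree-(m-1) polynomial by weighted least squares with a
    compactly supported weight centered at x, then evaluate it at x."""
    w = np.maximum(1.0 - np.abs(x - xi) / h, 0.0) ** 3   # illustrative weight
    P = np.vander(xi - x, m, increasing=True)            # shifted basis [1, xi-x, ...]
    A = (P.T * w) @ P                                    # moment matrix
    b = (P.T * w) @ ui
    return np.linalg.solve(A, b)[0]                      # constant term = value at x
```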

  47. Understanding Least Squares through Monte Carlo Calculations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    The method of least squares (LS) is considered an important data analysis tool available to physical scientists. The mathematics of linear least squares (LLS) is summarized in a very compact matrix notation that renders it practically "formulaic".
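
    The article's premise is easily demonstrated: simulate many noisy data sets, fit each, and compare the Monte Carlo spread of the fitted parameters with the formulaic LLS prediction. A possible sketch (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 25)
a_true, b_true, sigma = 2.0, 0.5, 0.3      # intercept, slope, noise level

# Monte Carlo: fit 5000 synthetic data sets; polyfit returns [slope, intercept]
fits = np.array([np.polyfit(x, a_true + b_true * x
                            + rng.normal(0, sigma, x.size), 1)
                 for _ in range(5000)])
print("MC spread (slope, intercept):", fits.std(axis=0))

# Formulaic LLS prediction: sqrt of the diagonal of sigma^2 (A^T A)^-1
A = np.c_[x, np.ones_like(x)]
cov = sigma ** 2 * np.linalg.inv(A.T @ A)
print("LLS-predicted SEs:           ", np.sqrt(np.diag(cov)))
```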

  48. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.

  49. Generalized adjustment by least squares (GALS).

    USGS Publications Warehouse

    Elassal, A.A.

    1983-01-01

    The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author

  50. A least-squares method for second order noncoercive elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Ku, Jaeun

    2007-03-01

    In this paper, we consider a least-squares method proposed by Bramble, Lazarov and Pasciak (1998) which can be thought of as a stabilized Galerkin method for noncoercive problems with unique solutions. We modify their method by weakening the strength of the stabilization terms and present various new error estimates. The modified method has all the desirable properties of the original method; indeed, we shall show some theoretical properties that are not known for the original method. At the same time, our numerical experiments show an improvement of the method due to the modification.

  51. Collinearity in Least-Squares Analysis

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  52. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…

  53. Three Perspectives on Teaching Least Squares

    ERIC Educational Resources Information Center

    Scariano, Stephen M.; Calzada, Maria

    2004-01-01

    The method of Least Squares is the most widely used technique for fitting a straight line to data, and it is typically discussed in several undergraduate courses. This article focuses on three developmentally different approaches for solving the Least Squares problem that are suitable for classroom exposition.

  54. Weighted conditional least-squares estimation

    SciTech Connect

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered.

  15. FRACVAL: Validation (nonlinear least squares method) of the solution of one-dimensional transport of decaying species in a discrete planar fracture with rock matrix diffusion

    SciTech Connect

    Gureghian, A.B.

    1990-08-01

    Analytical solutions based on the Laplace transforms are presented for the one-dimensional, transient, advective-dispersive transport of a reacting radionuclide through a discrete planar fracture with constant aperture subject to diffusion in the surrounding rock matrix where both regions of solute migration display residual concentrations. The dispersion-free solutions, which are of closed form, are also reported. The solution assumes that the ground-water flow regime is under steady-state and isothermal conditions and that the rock matrix is homogeneous, isotropic, and saturated with stagnant water. The verification of the solution was performed by means of related analytical solutions dealing with particular aspects of the transport problem under investigation on the one hand, and a numerical solution capable of handling the complete problem on the other. The integrals encountered in the general solution are evaluated by means of a composite Gauss-Legendre quadrature scheme. 9 refs., 8 figs., 32 tabs.
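
    The composite Gauss-Legendre quadrature mentioned at the end of the abstract is easy to sketch; this minimal version (generic integrand, not FRACVAL's actual integrals) applies an n-point rule on each panel of a uniform partition:

```python
import numpy as np

def composite_gauss_legendre(f, a, b, n_panels=8, n_nodes=5):
    """Integrate f over [a, b] with an n_nodes-point Gauss-Legendre
    rule applied on each of n_panels equal subintervals."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    edges = np.linspace(a, b, n_panels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        half, mid = 0.5 * (hi - lo), 0.5 * (hi + lo)
        total += half * np.sum(weights * f(mid + half * nodes))
    return total

# e.g. integral of exp(-x) over [0, 5]; exact value is 1 - exp(-5)
print(composite_gauss_legendre(lambda x: np.exp(-x), 0.0, 5.0))
```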

  16. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
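
    A hedged sketch of this kind of formulation using cvxpy (not the authors' code; the test data are synthetic and the bound is arbitrary). The rigid-body torque relation tau = J wdot + w x (J w) is linear in the inertia matrix J, so the residual is affine in J and the PSD/LMI bounds make the problem a semidefinite program:

```python
import numpy as np
import cvxpy as cp

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
w = rng.standard_normal((40, 3))    # angular rates (hypothetical test data)
wd = rng.standard_normal((40, 3))   # angular accelerations
J_true = np.diag([2.0, 3.0, 4.0])
tau = np.array([J_true @ a + skew(r) @ (J_true @ r) for r, a in zip(w, wd)])

J = cp.Variable((3, 3), PSD=True)   # symmetric positive semidefinite inertia
residuals = [J @ wd[k] + skew(w[k]) @ (J @ w[k]) - tau[k] for k in range(40)]
constraints = [J >> 0.1 * np.eye(3)]          # explicit LMI lower bound
prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.vstack(residuals))), constraints)
prob.solve()                        # needs an SDP-capable solver, e.g. SCS
print(J.value)
```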

  17. A spectral mimetic least-squares method

    DOE PAGES

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  18. A spectral mimetic least-squares method

    SciTech Connect

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  19. Theoretical study of the incompressible Navier-Stokes equations by the least-squares method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.

    1994-01-01

    Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method, which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is studied thoroughly first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique is then employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of differential equations and additional boundary conditions. In this paper, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators which play an essential role in the analysis.

  20. Review of the Generalized Least Squares Method

    NASA Astrophysics Data System (ADS)

    Menke, William

    2014-09-01

    The generalized least squares (GLS) method uses both data and prior information to solve for a best-fitting set of model parameters. We review the method and present simplified derivations of its essential formulas. Concepts of resolution and covariance—essential in all of inverse theory—are applicable to GLS, but their meaning, and especially that of resolution, must be carefully interpreted. We introduce derivations that show that the quantity being resolved is the deviation of the solution from the prior model and that the covariance of the model depends on both the uncertainty in the data and the uncertainty in the prior information. On face value, the GLS formulas for resolution and covariance seem to require matrix inverses that may be difficult to calculate for the very large (but often sparse) linear systems encountered in practical inverse problems. We demonstrate how to organize the computations in an efficient manner and present MATLAB code that implements them. Finally, we formulate the well-understood problem of interpolating data with minimum curvature splines as an inverse problem and use it to illustrate the GLS method.
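
    The GLS estimate the review derives can be sketched compactly. The snippet below (Python rather than the authors' MATLAB, with generic symbols) combines data d = Gm + noise of covariance Cd with a prior m_prior of covariance Cm:

```python
import numpy as np

def gls(G, d, Cd, m_prior, Cm):
    """Generalized least squares: minimize
    (d - G m)^T Cd^{-1} (d - G m) + (m - m_prior)^T Cm^{-1} (m - m_prior)."""
    Cd_inv = np.linalg.inv(Cd)
    Cm_inv = np.linalg.inv(Cm)
    A = G.T @ Cd_inv @ G + Cm_inv       # inverse of the posterior covariance
    rhs = G.T @ Cd_inv @ d + Cm_inv @ m_prior
    return np.linalg.solve(A, rhs)
```

    For the very large sparse systems the review discusses, one would of course solve with an iterative method rather than forming explicit inverses.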

  1. Review of the Generalized Least Squares Method

    NASA Astrophysics Data System (ADS)

    Menke, William

    2015-01-01

    The generalized least squares (GLS) method uses both data and prior information to solve for a best-fitting set of model parameters. We review the method and present simplified derivations of its essential formulas. Concepts of resolution and covariance—essential in all of inverse theory—are applicable to GLS, but their meaning, and especially that of resolution, must be carefully interpreted. We introduce derivations that show that the quantity being resolved is the deviation of the solution from the prior model and that the covariance of the model depends on both the uncertainty in the data and the uncertainty in the prior information. On face value, the GLS formulas for resolution and covariance seem to require matrix inverses that may be difficult to calculate for the very large (but often sparse) linear systems encountered in practical inverse problems. We demonstrate how to organize the computations in an efficient manner and present MATLAB code that implements them. Finally, we formulate the well-understood problem of interpolating data with minimum curvature splines as an inverse problem and use it to illustrate the GLS method.

  2. Partial least squares and random sample consensus in outlier detection.

    PubMed

    Peng, Jiangtao; Peng, Silong; Hu, Yong

    2012-03-16

    A novel outlier detection method in partial least squares based on random sample consensus is proposed. The proposed algorithm repeatedly generates partial least squares solutions estimated from random samples and then tests each solution for support from the complete dataset for consistency. A comparative study of the proposed method and leave-one-out cross validation for outlier detection on simulated data and near-infrared data of pharmaceutical tablets is presented. In addition, a comparison between the proposed method and PLS, RSIMPLS, and PRM is provided. The results obtained demonstrate that the proposed method is highly efficient.
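
    A hedged sketch of the general idea (not the authors' algorithm or thresholds): repeatedly fit a PLS model to random subsets, score each fit by how many samples it explains within a tolerance, and flag the rest as candidate outliers:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ransac_pls(X, y, n_components=2, n_subsets=200, subset_size=20, thresh=1.0):
    """Return a boolean inlier mask; False entries are candidate outliers."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(y), dtype=bool)
    for _ in range(n_subsets):
        idx = rng.choice(len(y), size=subset_size, replace=False)
        pls = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        resid = np.abs(y - pls.predict(X).ravel())
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():   # keep the best consensus set
            best_inliers = inliers
    return best_inliers
```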

  3. Total Least-Squares Adjustment of Condition Equations

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Wieser, Andreas

    2010-05-01

    The usual least-squares adjustment within an Errors-in-Variables (EIV) model is often described as Total Least-Squares Solution (TLSS), just as the usual least-squares adjustment within a Random Effects Model (REM) has become popular under the name of Least-Squares Collocation (without trend). In comparison to the standard Gauss-Markov Model (GMM), the EIV-Model is less informative whereas the REM is more informative. It is known under which conditions exactly the GMM or the REM can be equivalently replaced by a model of Condition Equations or, more generally, by a Gauss-Helmert-Model (GHM). Such equivalency conditions are, however, still unknown for the EIV-Model once it is transformed into such a model of Condition Equations. In a first step, it is shown in this contribution what the respective residual vector and residual matrix would look like if the Total Least-Squares Solution is applied to condition equations with a random coefficient matrix to describe the transformation of the random error vector. The results are demonstrated using numerical examples, which show that this approach may be valuable in its own right.

  4. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…

  5. Least squares polynomial fits and their accuracy

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1977-01-01

    Equations are presented which fit least squares polynomials to tables of data. It is concluded that much data are needed to reduce the measurement error standard deviation by a significant amount; however, at certain points great accuracy is attained.

  6. Least squares estimation of avian molt rates

    USGS Publications Warehouse

    Johnson, D.H.

    1989-01-01

    A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.

  7. BLS: Box-fitting Least Squares

    NASA Astrophysics Data System (ADS)

    Kovács, G.; Zucker, S.; Mazeh, T.

    2016-07-01

    BLS (Box-fitting Least Squares) is a box-fitting algorithm that analyzes stellar photometric time series to search for periodic transits of extrasolar planets. It searches for signals characterized by a periodic alternation between two discrete levels, with much less time spent at the lower level.
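
    For readers who want to try the algorithm, an independent implementation of BLS ships with astropy; a minimal sketch on a synthetic light curve (all numbers hypothetical):

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2000)
y = np.ones_like(t)
y[(t % 3.0) < 0.15] -= 0.01            # box-shaped dips with period 3
y += rng.normal(0.0, 0.002, t.size)

model = BoxLeastSquares(t, y)
result = model.autopower(0.2)          # 0.2 = transit duration to search over
best_period = result.period[np.argmax(result.power)]
print(best_period)                     # should land near 3.0
```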

  8. Least-squares fitting Gompertz curve

    NASA Astrophysics Data System (ADS)

    Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf

    2004-08-01

    In this paper we consider the least-squares (LS) fitting of the Gompertz curve to given nonconstant data (p_i, t_i, y_i), i = 1, ..., m, m ≥ 3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
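
    A common way to compute such an LS fit in practice (a sketch, not the authors' procedure; data and starting values are hypothetical) is nonlinear least squares on the Gompertz form y = a exp(-b exp(-c t)):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # a: asymptote, b: displacement, c: growth rate
    return a * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 30)
y = gompertz(t, 5.0, 3.0, 0.8) + rng.normal(0.0, 0.05, t.size)

# a good initial approximation matters, as the abstract emphasizes
p0 = [y.max(), 1.0, 0.5]
params, _ = curve_fit(gompertz, t, y, p0=p0)
print(params)
```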

  9. A Limitation with Least Squares Predictions

    ERIC Educational Resources Information Center

    Bittner, Teresa L.

    2013-01-01

    Although researchers have documented that some data make larger contributions than others to predictions made with least squares models, it is relatively unknown that some data actually make no contribution to the predictions produced by these models. This article explores such noncontributory data. (Contains 1 table and 2 figures.)

  10. Least-squares RTM with L1 norm regularisation

    NASA Astrophysics Data System (ADS)

    Wu, Di; Yao, Gang; Cao, Jingjie; Wang, Yanghua

    2016-10-01

    Reverse time migration (RTM), for imaging complex Earth models, is a reversal procedure of the forward modelling of seismic wavefields, and hence can be formulated as an inverse problem. The least-squares RTM method attempts to minimise the difference between the observed field data and the synthetic data generated by the migration image. It can reduce the artefacts in the images of a conventional RTM which uses an adjoint operator, instead of an inverse operator, for the migration. However, as the least-squares inversion provides an average solution with minimal variation, the resolution of the reflectivity image is compromised. This paper presents the least-squares RTM method with a model constraint defined by an L1-norm of the reflectivity image. For solving the least-squares RTM with L1 norm regularisation, the inversion is reformulated as a ‘basis pursuit de-noise (BPDN)’ problem, and is solved directly using an algorithm called ‘spectral projected gradient for L1 minimisation (SPGL1)’. Three numerical examples demonstrate the effectiveness of the method which can mitigate artefacts and produce clean images with significantly higher resolution than the least-squares RTM without such a constraint.
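
    The paper solves the BPDN reformulation with SPGL1; as a self-contained stand-in, the sketch below applies plain iterative soft-thresholding (ISTA) to the related L1-regularised least-squares objective (generic operator A, not a migration operator):

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterative
    soft-thresholding with step 1/L, where L = ||A||_2^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```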

  11. A Least-Squares Transport Equation Compatible with Voids

    SciTech Connect

    Hansen, Jon; Peterson, Jacob; Morel, Jim; Ragusa, Jean; Wang, Yaqi

    2014-12-01

    Standard second-order self-adjoint forms of the transport equation, such as the even-parity, odd-parity, and self-adjoint angular flux equation, cannot be used in voids. Perhaps more important, they experience numerical convergence difficulties in near-voids. Here we present a new form of a second-order self-adjoint transport equation that has an advantage relative to standard forms in that it can be used in voids or near-voids. Our equation is closely related to the standard least-squares form of the transport equation, with both equations being applicable in a void and having a nonconservative analytic form. However, unlike the standard least-squares form of the transport equation, our least-squares equation is compatible with source iteration. It has been found that the standard least-squares form of the transport equation with a linear-continuous finite-element spatial discretization has difficulty in the thick diffusion limit. Here we extensively test the 1D slab-geometry version of our scheme with respect to void solutions, spatial convergence rate, and the intermediate and thick diffusion limits. We also define an effective diffusion synthetic acceleration scheme for our discretization. Our conclusion is that our least-squares Sn formulation represents an excellent alternative to existing second-order Sn transport formulations.

  12. Discontinuous Galerkin finite element solution for poromechanics

    NASA Astrophysics Data System (ADS)

    Liu, Ruijie

    This dissertation focuses on applying discontinuous Galerkin (DG) methods to poromechanics problems. Traditional continuous Galerkin (CG) finite element methods, although popular, face several challenges in solving complex coupled thermal, flow and solid mechanics problems. For example, nonphysical pore pressure oscillations often occur in CG solutions for poroelasticity problems with low permeability, and a robust and practical numerical scheme for removing or alleviating the oscillation is not available. In modeling thermoporoelastoplasticity, CG methods require the use of very small time steps to obtain a convergent solution. The temperature profile predicted by CG methods in fine mesh zones is often seriously polluted by large errors produced in coarse mesh zones when convection dominates the thermal process. Moreover, the nonphysical oscillations in pore pressure and temperature solutions induced by CG methods at very early time stages seriously corrupt the solutions at longer times. We propose DG methods to handle these challenges because they are physics driven, provide local conservation of mass and momentum, have high stability and robustness, are locking-free, and offer attractive meshing and implementation capabilities. We first apply a family of DG methods, including Oden-Babuska-Baumann (OBB), Nonsymmetric Interior Penalty Galerkin (NIPG), Symmetric Interior Penalty Galerkin (SIPG) and Incomplete Interior Penalty Galerkin (IIPG), to 3D linear elasticity problems. This family of DG methods is tested and evaluated on a cantilever beam problem with nearly incompressible materials. It is shown that DG methods are simple, robust and locking-free in dealing with nearly incompressible materials. Based on the success of DG methods in elasticity, we extend the DG theory to plasticity problems. A DG formulation has been implemented for solving 3D poroelasticity problems with low permeability. Numerical examples solved by DG methods demonstrate

  13. Least Squares Time-Series Synchronization in Image Acquisition Systems.

    PubMed

    Piazzo, Lorenzo; Raguso, Maria Carmela; Calzoletti, Luca; Seu, Roberto; Altieri, Bruno

    2016-07-18

    We consider an acquisition system constituted by an array of sensors scanning an image. Each sensor produces a sequence of readouts, called a time-series. In this framework, we discuss the image estimation problem when the time-series are affected by noise and by a time shift. In particular, we introduce an appropriate data model and consider the Least Squares (LS) estimate, showing that it has no closed form. However, the LS problem has a structure that can be exploited to simplify the solution. In particular, based on two known techniques, namely Separable Nonlinear Least Squares (SNLS) and Alternating Least Squares (ALS), we propose and analyze several practical estimation methods. As an additional contribution, we discuss the application of these methods to the data of the Photodetector Array Camera and Spectrometer (PACS), which is an infrared photometer onboard the Herschel satellite. In this context, we investigate the accuracy and the computational complexity of the methods, using both true and simulated data.
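
    The separable structure the authors exploit is easy to see in a toy version (hypothetical template and numbers): the time shift enters the model nonlinearly, but for any fixed shift the amplitude is a closed-form linear LS solve, so the outer search runs over the shift alone:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 500)
template = lambda s: np.exp(-0.5 * (s - 5.0) ** 2)   # known signal shape
data = 2.5 * template(t - 0.7) + rng.normal(0.0, 0.02, t.size)

best = (np.inf, None, None)
for delta in np.linspace(-2.0, 2.0, 401):    # grid over the nonlinear shift
    g = template(t - delta)
    alpha = (g @ data) / (g @ g)             # closed-form linear LS amplitude
    resid = np.sum((data - alpha * g) ** 2)
    if resid < best[0]:
        best = (resid, delta, alpha)
print("shift:", best[1], "amplitude:", best[2])
```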

  14. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  15. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2002-01-01

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  16. Least Squares Moving-Window Spectral Analysis.

    PubMed

    Lee, Young Jong

    2017-01-01

    Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
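
    A bare-bones version of the idea (a sketch, not the published LSMW code): slide a window along the perturbation axis and, in each window, take the slope of a first-degree least-squares fit, which accommodates nonuniform perturbation spacing:

```python
import numpy as np

def lsmw_derivative(p, spectra, half_window=3):
    """p: perturbation values, shape (n,); spectra: shape (n, n_channels).
    Returns d(spectrum)/d(p) at each perturbation point."""
    n = len(p)
    deriv = np.zeros_like(spectra, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        coeffs = np.polyfit(p[lo:hi], spectra[lo:hi], deg=1)
        deriv[i] = coeffs[0]        # slope of the local linear fit
    return deriv
```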

  17. Least squares restoration of multichannel images

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Katsaggelos, Aggelos K.; Chin, Roland T.; Hillery, Allen D.

    1991-01-01

    Multichannel restoration using both within- and between-channel deterministic information is considered. A multichannel image is a set of image planes that exhibit cross-plane similarity. Existing optimal restoration filters for single-plane images yield suboptimal results when applied to multichannel images, since between-channel information is not utilized. Multichannel least squares restoration filters are developed using the set theoretic and the constrained optimization approaches. A geometric interpretation of the estimates of both filters is given. Color images (three-channel imagery with red, green, and blue components) are considered. Constraints that capture the within- and between-channel properties of color images are developed. Issues associated with the computation of the two estimates are addressed. A spatially adaptive, multichannel least squares filter that utilizes local within- and between-channel image properties is proposed. Experiments using color images are described.

  18. Least Squares Estimation Without Priors or Supervision

    PubMed Central

    Raphan, Martin; Simoncelli, Eero P.

    2011-01-01

    Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased risk estimator (SURE), which provides such an expression for the additive gaussian noise case. This error expression may then be optimized over noisy measurement samples, in the absence of supervised training data, yielding a generalized SURE-optimized parametric least squares (SURE2PLS) estimator. In the special case of a linear parameterization (i.e., a sum of nonlinear kernel functions), the objective function is quadratic, and we derive an incremental form for learning this estimator from data. We also show that combining the NEBLS form with its corresponding generalized SURE expression produces a generalization of the score-matching procedure for parametric density estimation. Finally, we have implemented several examples of such estimators, and we show that their performance is comparable to their optimal Bayesian or supervised regression counterparts for moderate to large amounts of data. PMID:21105827

  19. Least-squares Gaussian beam migration

    NASA Astrophysics Data System (ADS)

    Yuan, Maolin; Huang, Jianping; Liao, Wenyuan; Jiang, Fuyou

    2017-02-01

    A theory of least-squares Gaussian beam migration (LSGBM) is presented to optimally estimate a subsurface reflectivity. In the iterative inversion scheme, a Gaussian beam (GB) propagator is used as the kernel of linearized forward modeling (demigration) and its adjoint (migration). Born approximation based GB demigration relies on the calculation of Green’s function by a Gaussian-beam summation for the downward and upward wavefields. The adjoint operator of GB demigration accounts for GB prestack depth migration under the cross-correlation imaging condition, where seismic traces are processed one by one for each shot. A numerical test on the point diffractors model suggests that GB demigration can successfully simulate primary scattered data, while migration (adjoint) can yield a corresponding image. The GB demigration/migration algorithms are used for the least-squares migration scheme to deblur conventional migrated images. The proposed LSGBM is illustrated with two synthetic data for a four-layer model and the Marmousi2 model. Numerical results show that LSGBM, compared to migration (adjoint) with GBs, produces images with more balanced amplitude, higher resolution and even fewer artifacts. Additionally, the LSGBM shows a robust convergence rate.

  20. Total least squares for anomalous change detection

    SciTech Connect

    Theiler, James P; Matsekh, Anna M

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting the derivations of two of the most popular anomalous change detection algorithms (chronochrome and covariance equalization) in a common language, is a generalization of these algorithms with the potential for better performance.

  1. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  2. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  3. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  4. Hybrid least squares multivariate spectral analysis methods

    DOEpatents

    Haaland, David M.

    2004-03-23

    A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.

  5. Classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.

    2002-01-01

    An improved classical least squares multivariate spectral analysis method that adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions to the CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of PACLS includes the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.

  6. Multisplitting for linear, least squares and nonlinear problems

    SciTech Connect

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems and with Hans Mittelmann, Arizona State University, on the nonlinear problems.

  7. A new least-squares transport equation compatible with voids

    SciTech Connect

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties. (authors)

  8. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    PubMed

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection Kaczmarz algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.

  9. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  10. A Christoffel function weighted least squares algorithm for collocation approximations

    SciTech Connect

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  11. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L²-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L² method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H¹-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  12. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
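
    A toy flavour of the idea (the paper solves one global least-squares system; this sketch merely alternates the two easy sub-solves, and all data are synthetic). The assumed model is d = gain[p] * sky[j] + offset[p] for pixel p observing sky cell j:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_sky, n_obs = 8, 30, 600
pix = rng.integers(0, n_pix, n_obs)     # detector pixel for each sample
cell = rng.integers(0, n_sky, n_obs)    # sky cell seen (set by the dithers)
gain_t = 1.0 + 0.1 * rng.standard_normal(n_pix)
off_t = 0.5 * rng.standard_normal(n_pix)
sky_t = rng.uniform(1.0, 10.0, n_sky)
d = gain_t[pix] * sky_t[cell] + off_t[pix] + 0.01 * rng.standard_normal(n_obs)

gain, off, sky = np.ones(n_pix), np.zeros(n_pix), np.ones(n_sky)
for _ in range(50):
    # LS update of the sky given current gains/offsets
    num = np.bincount(cell, weights=gain[pix] * (d - off[pix]), minlength=n_sky)
    den = np.bincount(cell, weights=gain[pix] ** 2, minlength=n_sky)
    sky = num / np.maximum(den, 1e-12)
    # LS update of each pixel's gain and offset (a straight-line fit)
    for p in range(n_pix):
        m = pix == p
        gain[p], off[p] = np.polyfit(sky[cell[m]], d[m], 1)
    gain /= gain.mean()                 # pin down the overall scale degeneracy
print("rms residual:", np.sqrt(np.mean((gain[pix] * sky[cell] + off[pix] - d) ** 2)))
```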

  13. Solving linear inequalities in a least squares sense

    SciTech Connect

    Bramley, R.; Winnicka, B.

    1994-12-31

    Let A ∈ R^(m×n) be an arbitrary real matrix, and let b ∈ R^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ‖(Ax − b)_+‖, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
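
    The objective ‖(Ax − b)_+‖² is smooth with gradient A^T (Ax − b)_+, so a simple first-order sketch (hypothetical step rule, not the authors' algorithm) looks like:

```python
import numpy as np

def lsq_inequalities(A, b, n_iter=1000):
    """Minimize 0.5 * ||max(Ax - b, 0)||^2, i.e. solve Ax <= b
    in a least squares sense, by gradient descent."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = np.maximum(A @ x - b, 0.0)   # only violated rows contribute
        x -= step * (A.T @ r)
    return x
```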

  14. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.

  15. Robust inverse kinematics using damped least squares with dynamic weighting

    NASA Technical Reports Server (NTRS)

    Schinstock, D. E.; Faddis, T. N.; Greenway, R. B.

    1994-01-01

    This paper presents a general method for calculating the inverse kinematics with singularity and joint limit robustness for both redundant and non-redundant serial-link manipulators. A damped least-squares inverse of the Jacobian is used with dynamic weighting matrices in approximating the solution, selectively reducing specific joint differential vectors. The algorithm gives an exact solution away from the singularities and joint limits, and an approximate solution at or near the singularities and/or joint limits. The procedure is implemented here for a six-d.o.f. teleoperator, and the slave manipulator remained well behaved under teleoperational control.
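
    The core update is compact; a minimal numpy sketch (J, dx, and the weighting are placeholders, not the paper's teleoperator code). With a diagonal weight W that grows for joints near their limits, the damped weighted solution is dq = W⁻¹Jᵀ (J W⁻¹Jᵀ + λ²I)⁻¹ dx:

```python
import numpy as np

def dls_step(J, dx, lam=0.1, W=None):
    """One damped least-squares IK step with optional joint weighting.
    J: (m, n) Jacobian; dx: (m,) task-space error; W: (n, n) weights."""
    m, n = J.shape
    W_inv = np.eye(n) if W is None else np.linalg.inv(W)
    JW = J @ W_inv
    return W_inv @ J.T @ np.linalg.solve(JW @ J.T + lam**2 * np.eye(m), dx)
```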

  16. A Christoffel function weighted least squares algorithm for collocation approximations [The Christoffel least squares algorithm for collocation approximations]

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  17. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by the nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of substituting the nonlinear equation by a minimization problem in an H^(-1)-type Sobolev functional space.

  18. Götterdämmerung over total least squares

    NASA Astrophysics Data System (ADS)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have been developed in the past also by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line to 3D and the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.

  19. Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction

    DTIC Science & Technology

    2009-08-20

    computational fluid problems using continuous sensitivity methods [128]. Most non-fluid applications of CSE in the literature have been limited to 1D scalar...squares functional statement of the problem. This is also mentioned in several early (non-FSI) papers covered by Eason's survey of least-squares...operators) and the Galerkin weighted residual method (which exhibits problems with non-self-adjoint systems and is subject to the restrictions of the LBB

  20. Least-squares methods involving the H^(-1) inner product

    SciTech Connect

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H^(-1) norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H^(-1) inner product.

  1. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. Newly reported influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
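
    A minimal illustration of the regularized least-squares step itself (generic Tikhonov form; the matrix A standing in for the linearized model-to-data map is hypothetical):

```python
import numpy as np

def regularized_lsq(A, b, lam=1e-2):
    """Tikhonov-regularized least squares:
    minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```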

  2. Evaluation of fuzzy inference systems using fuzzy least squares

    NASA Technical Reports Server (NTRS)

    Barone, Joseph M.

    1992-01-01

    Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.

  3. General spline filters for discontinuous Galerkin solutions

    PubMed Central

    Peters, Jörg

    2015-01-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots. PMID:26594090

  4. General spline filters for discontinuous Galerkin solutions.

    PubMed

    Peters, Jörg

    2015-09-01

    The discontinuous Galerkin (dG) method outputs a sequence of polynomial pieces. Post-processing the sequence by Smoothness-Increasing Accuracy-Conserving (SIAC) convolution not only increases the smoothness of the sequence but can also improve its accuracy and yield superconvergence. SIAC convolution is considered optimal if the SIAC kernels, in the form of a linear combination of B-splines of degree d, reproduce polynomials of degree 2d. This paper derives simple formulas for computing the optimal SIAC spline coefficients for the general case including non-uniform knots.

  5. Multilevel solvers of first-order system least-squares for Stokes equations

    SciTech Connect

    Lai, Chen-Yao G.

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L²-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  6. Orthogonalizing EM: A design-based least squares algorithm.

    PubMed

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.

  7. Orthogonalizing EM: A design-based least squares algorithm

    PubMed Central

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.

    2016-01-01

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558

  8. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
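    A weighted fit of the Langmuir isotherm is straightforward with standard tools. The sketch below uses scipy's curve_fit with per-point standard deviations; the data values and the proportional-error assumption are illustrative only, not taken from the record.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Smax, K):
    """Langmuir isotherm: sorbed amount as a function of concentration C."""
    return Smax * K * C / (1.0 + K * C)

# Hypothetical sorption data; sigma carries per-point standard deviations,
# so curve_fit minimizes the weighted sum of squared residuals.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # solution concentration
S = np.array([12.0, 20.0, 30.0, 42.0, 48.0, 52.0])  # sorbed P
sigma = 0.05 * S                                    # assumed proportional error

popt, pcov = curve_fit(langmuir, C, S, p0=(60.0, 0.2), sigma=sigma,
                       absolute_sigma=True)
print("Smax, K =", popt, "std errs =", np.sqrt(np.diag(pcov)))
```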

  9. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  10. Source Localization using Stochastic Approximation and Least Squares Methods

    SciTech Connect

    Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis

    2009-03-05

    This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate the source. The nonlinear least-squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least-squares method. SA methods are often better at coping with noisy input information than other search methods.
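    As an illustration of the nonlinear least-squares half of the comparison, the sketch below fits a source position and strength to noisy sensor readings. The inverse-square concentration model is a toy stand-in for the paper's plume model, and all names and values are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(params, sensors):
    """Toy model: concentration decays with squared distance to the source."""
    sx, sy, q = params
    d2 = (sensors[:, 0] - sx) ** 2 + (sensors[:, 1] - sy) ** 2
    return q / (d2 + 1e-6)

def residuals(params, sensors, measured):
    return plume(params, sensors) - measured

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 10, size=(25, 2))               # sensor positions
true = np.array([4.0, 6.0, 50.0])                        # source x, y, strength
measured = plume(true, sensors) + rng.normal(0, 0.05, 25)  # noisy readings

fit = least_squares(residuals, x0=[5.0, 5.0, 10.0], args=(sensors, measured))
print("estimated source location:", fit.x[:2])
```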

  11. A least squares closure approximation for liquid crystalline polymers

    NASA Astrophysics Data System (ADS)

    Sievenpiper, Traci Ann

    2011-12-01

    An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing kinetic simulations for a Landau-deGennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.

  12. An element-free Galerkin (EFG) method for numerical solution of the coupled Schrödinger-KdV equations

    NASA Astrophysics Data System (ADS)

    Liu, Yong-Qing; Cheng, Rong-Jun; Ge, Hong-Xia

    2013-10-01

    The present paper deals with the numerical solution of the coupled Schrödinger-KdV equations using the element-free Galerkin (EFG) method, which is based on the moving least-squares approximation. In contrast to traditional mesh-oriented methods such as the finite difference method (FDM) and the finite element method (FEM), this method needs only scattered nodes in the domain. For this scheme, a variational method is used to obtain the discrete equations, and the essential boundary conditions are enforced by the penalty method. In numerical experiments, the results are presented and compared with the findings of the finite element method, the radial basis functions method, and an analytical solution to confirm the good accuracy of the presented scheme.
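    The moving least-squares approximation underlying EFG can be sketched in one dimension: at each evaluation point a low-order polynomial is fitted to nearby nodal values with a weight that decays with distance. The Gaussian weight and the kernel width h below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def mls_eval(x, nodes, u, h=0.4):
    """Moving least-squares approximation at x from scattered nodes.

    A linear basis [1, x] is fitted locally; Gaussian weights centred
    at x give nearby nodes more influence.
    """
    P = np.column_stack([np.ones_like(nodes), nodes])   # basis at the nodes
    w = np.exp(-((nodes - x) / h) ** 2)                 # node weights
    A = P.T @ (w[:, None] * P)                          # moment matrix
    b = P.T @ (w * u)
    coeff = np.linalg.solve(A, b)
    return coeff[0] + coeff[1] * x                      # local polynomial at x

nodes = np.linspace(0.0, 1.0, 11)                       # scattered nodes
u = np.sin(2 * np.pi * nodes)                           # nodal values
print(mls_eval(0.37, nodes, u), np.sin(2 * np.pi * 0.37))  # close agreement
```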

  13. Iterative least-squares solvers for the Navier-Stokes equations

    SciTech Connect

    Bochev, P.

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  14. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method.

    PubMed

    Zhu, W; Wang, Y; Yao, Y; Chang, J; Graber, H L; Barbour, R L

    1997-04-01

    We present an iterative total least-squares algorithm for computing images of the interior structure of highly scattering media by using the conjugate gradient method. For imaging the dense scattering media in optical tomography, a perturbation approach has been described previously [Y. Wang et al., Proc. SPIE 1641, 58 (1992); R. L. Barbour et al., in Medical Optical Tomography: Functional Imaging and Monitoring (Society of Photo-Optical Instrumentation Engineers, Bellingham, Wash., 1993), pp. 87-120], which solves a perturbation equation of the form W delta x = delta I. In order to solve this equation, least-squares or regularized least-squares solvers have been used in the past to determine best fits to the measurement data delta I while assuming that the operator matrix W is accurate. In practice, errors also occur in the operator matrix. Here we propose an iterative total least-squares (ITLS) method that minimizes the errors in both weights and detector readings. Theoretically, the total least-squares (TLS) solution is given by the singular vector of the matrix [W/ delta I] associated with the smallest singular value. The proposed ITLS method obtains this solution by using a conjugate gradient method that is particularly suitable for very large matrices. Simulation results have shown that the TLS method can yield a significantly more accurate result than the least-squares method.
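    The TLS solution the authors iterate toward can be written down directly for moderate problem sizes: it is read off the right singular vector of the augmented matrix associated with the smallest singular value. A minimal sketch, using a direct SVD in place of the paper's conjugate-gradient scheme:

```python
import numpy as np

def tls(W, dI):
    """Total least squares via the SVD: the solution comes from the right
    singular vector of [W | dI] for the smallest singular value."""
    Z = np.column_stack([W, dI])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                       # right singular vector, smallest sigma
    return -v[:-1] / v[-1]

# Errors-in-variables setup: both the operator and the readings are noisy.
rng = np.random.default_rng(2)
W0 = rng.standard_normal((200, 4))                      # error-free operator
x_true = np.array([1.0, -2.0, 0.5, 3.0])
W = W0 + 0.01 * rng.standard_normal(W0.shape)           # perturbed weights
dI = W0 @ x_true + 0.01 * rng.standard_normal(200)      # perturbed readings
print(tls(W, dI))                                       # close to x_true
```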

  15. Applied Algebra: The Modeling Technique of Least Squares

    ERIC Educational Resources Information Center

    Zelkowski, Jeremy; Mayes, Robert

    2008-01-01

    The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)

  16. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.

  17. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    DTIC Science & Technology

    2016-03-01

    Parallel solvers are presented for the nonnegative least squares (NNLS) problems that arise when the Energy Conserving Sampling and Weighting (ECSW) hyper-reduction procedure is used in constructing a reduced-order model. An implementation based on ScaLAPACK is described and performance results are presented. Keywords: nonnegative least squares, model order reduction, hyper-reduction, Energy Conserving Sampling and Weighting (ECSW).

  18. Performance Analysis of the Least-Squares Estimator in Astrometry

    NASA Astrophysics Data System (ADS)

    Lobos, Rodrigo A.; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos

    2015-11-01

    We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not have a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramer-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator for the astrometric position that can precisely reach the Cramer-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector where intra-pixel response changes are neglected, and where flat-fielding is achieved with very high accuracy.

  19. Seismic Sensor orientation by complex linear least squares

    NASA Astrophysics Data System (ADS)

    Grigoli, Francesco; Cesca, Simone; Krieger, Lars; Olcay, Manuel; Tassara, Carlos; Sobiesiak, Monika; Dahm, Torsten

    2014-05-01

    Poorly known orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations, ocean bottom seismometers deployed at the sea floor, and surface seismic arrays. To solve this problem we propose an inversion method based on the complex linear least-squares method. Relative orientation angles, with respect to a reference sensor, are retrieved by minimizing the l2-norm between the complex traces (hodograms) of adjacent pairs of sensors in a least-squares sense. The absolute orientations are obtained in a second step by polarization analysis of stacked seismograms of a seismic event with known location. This methodology can be applied without restriction if the plane-wave approximation for the wavefields recorded by each pair of sensors is valid. In most cases, it is possible to satisfy this condition by low-pass filtering the recorded waveforms. The main advantage of our methodology is that estimating the relative orientations of seismic sensors in the complex domain is a linear inverse problem, which allows a direct solution corresponding to the global minimum of a misfit function. It is also possible to use more than one independent dataset simultaneously (e.g., several seismic events) to better constrain the solution of the inverse problem. Furthermore, from a computational point of view, our method is faster than relative orientation methods based on waveform cross-correlation. Our methodology can also be applied to test the correct orientation/alignment of multicomponent land stations in seismological arrays or temporary networks and to determine the absolute orientation of OBS stations and borehole arrays. We first apply our method to real data resembling two different acquisition setups: a borehole sensor array deployed in a gas field located in the
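    The relative-orientation step reduces to a one-parameter complex least-squares fit with a closed-form solution. A minimal sketch on synthetic traces; the names and noise levels are illustrative assumptions:

```python
import numpy as np

def relative_orientation(ref, rot):
    """Relative orientation angle between two horizontal-component sensors.

    Each (north, east) trace pair is combined into a complex trace n + i*e.
    The best complex factor aligning `rot` with `ref` in the least-squares
    sense is a = <rot, ref> / <rot, rot>; its phase is the rotation angle.
    """
    a = np.vdot(rot, ref) / np.vdot(rot, rot)
    return np.degrees(np.angle(a))

# Synthetic example: the second sensor is mis-oriented by 25 degrees.
rng = np.random.default_rng(3)
ref = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
theta = np.radians(25.0)
rot = ref * np.exp(-1j * theta) + 0.05 * (rng.standard_normal(1000)
                                          + 1j * rng.standard_normal(1000))
print(relative_orientation(ref, rot))   # approximately 25
```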

  20. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another, with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  1. From direct-space discrepancy functions to crystallographic least squares.

    PubMed

    Giacovazzo, Carmelo

    2015-01-01

    Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach also leads to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of the unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.

  2. On realizations of least-squares estimation and Kalman filtering by systolic arrays

    NASA Technical Reports Server (NTRS)

    Chen, M. J.; Yao, K.

    1986-01-01

    Least-squares (LS) estimation is a basic operation in many signal processing problems. Given y = Ax + v, where A is an m x n coefficient matrix, y is an m x 1 observation vector, and v is an m x 1 zero-mean white noise vector, a simple least-squares solution finds the estimated vector x which minimizes the norm ||Ax - y||. It is well known that for an ill-conditioned matrix A, solving least-squares problems by orthogonal triangular (QR) decomposition and back substitution has robust numerical properties under finite word length effects, since the 2-norm is preserved. Many fast algorithms have been proposed and applied to systolic arrays. Gentleman and Kung (1981) first presented the triangular systolic array for a basic Givens reduction. McWhirter (1983) used this array structure to find the least-squares estimation errors. Then, by a geometric approach, several different systolic array realizations of the recursive least-squares estimation algorithms of Lee et al. (1981) were derived by Kalson and Yao (1985). Basic QR decomposition algorithms are considered in this paper, and it is found that under a one-row time-updating situation, the Householder transformation degenerates to a simple Givens reduction. Next, an improved least-squares estimation algorithm is derived by considering a modified version of fast Givens reduction. From this approach, the basic relationship between Givens reduction and the modified Gram-Schmidt transformation can easily be understood. This improved algorithm also has simpler computational and inter-cell connection complexities when compared with other known least-squares algorithms and is more realistic for systolic array implementation.
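    A sequential sketch of the Givens-rotation QR least-squares solve that such arrays pipeline row by row (plain rotations, not the fast or systolic variants described in the record):

```python
import numpy as np

def givens_lstsq(A, y):
    """Least squares via QR decomposition with plain Givens rotations,
    followed by back substitution on the triangular factor."""
    A, y = A.astype(float).copy(), y.astype(float).copy()
    m, n = A.shape
    for j in range(n):
        for i in range(j + 1, m):            # zero out A[i, j]
            if A[i, j] == 0.0:
                continue
            r = np.hypot(A[j, j], A[i, j])
            c, s = A[j, j] / r, A[i, j] / r
            Aj, Ai = A[j, :].copy(), A[i, :].copy()
            A[j, :], A[i, :] = c * Aj + s * Ai, -s * Aj + c * Ai
            y[j], y[i] = c * y[j] + s * y[i], -s * y[j] + c * y[i]
    # First n rows of A now hold R; y holds Q'y.
    return np.linalg.solve(np.triu(A[:n, :n]), y[:n])

rng = np.random.default_rng(4)
A, y = rng.standard_normal((50, 3)), rng.standard_normal(50)
print(np.allclose(givens_lstsq(A, y), np.linalg.lstsq(A, y, rcond=None)[0]))
```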

  3. Least squares in calibration: dealing with uncertainty in x.

    PubMed

    Tellinghuisen, Joel

    2010-08-01

    The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y, and though easier to employ are considered to be less reliable. In the case of a linear response function with both sigma(x) and sigma(y) constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression--x upon y when x is error-free and y is uncertain--which yields a bias factor proportional to the ratio sigma(x)^2/sigma(xm)^2 of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x_0) from measurements (y_0) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as sigma(x) remains constant, but the estimation of x_0 is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
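    The EV idea is easy to state in code for a straight-line fit: the x uncertainty is projected into y through the current slope, and the weighted fit is iterated to self-consistency. A minimal sketch under the assumed constant sigma(x) and sigma(y); all data values are illustrative:

```python
import numpy as np

def effective_variance_fit(x, y, sx, sy, n_iter=10):
    """Straight-line fit with error in both x and y by the effective
    variance method: effective weights 1/(sy^2 + (b*sx)^2) depend on the
    slope b, so the weighted fit is iterated until self-consistent."""
    b = 0.0
    for _ in range(n_iter):
        w = 1.0 / (sy ** 2 + (b * sx) ** 2)   # effective weights
        xb, yb = np.sum(w * x) / np.sum(w), np.sum(w * y) / np.sum(w)
        b = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
        a = yb - b * xb
    return a, b

rng = np.random.default_rng(5)
x_true = np.linspace(0, 10, 30)
x = x_true + rng.normal(0, 0.2, 30)               # error in x
y = 1.0 + 2.0 * x_true + rng.normal(0, 0.3, 30)   # error in y
print(effective_variance_fit(x, y, sx=0.2, sy=0.3))  # near (1.0, 2.0)
```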

  4. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    ERIC Educational Resources Information Center

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…

  5. The Least-Squares Estimation of Latent Trait Variables.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi

    This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by maximum likelihood method…

  6. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

    2011-01-01

    This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.

  7. Using partial least squares regression to analyze cellular response data.

    PubMed

    Kreeger, Pamela K

    2013-04-16

    This Teaching Resource provides lecture notes, slides, and a problem set for a lecture introducing the mathematical concepts and interpretation of partial least squares regression (PLSR) that were part of a course entitled "Systems Biology: Mammalian Signaling Networks." PLSR is a multivariate regression technique commonly applied to analyze relationships between signaling or transcriptional data and cellular behavior.

  8. Software For Least-Squares And Robust Estimation

    NASA Technical Reports Server (NTRS)

    Jeffreys, William H.; Fitzpatrick, Michael J.; Mcarthur, Barbara E.; Mccartney, James

    1990-01-01

    The GAUSSFIT computer program includes a full-featured programming language that facilitates the creation of mathematical models for solving least-squares and robust-estimation problems. The programming language is designed to make it easy to specify complex reduction models. Written entirely in the C language.

  9. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
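    The unrestricted WLS average can be sketched compactly: it shares its point estimate with the fixed-effect average but scales the standard error by the regression's root mean squared error, without truncating that multiplier at one. The data below are hypothetical:

```python
import numpy as np

def uwls(effects, se):
    """Unrestricted weighted least squares meta-analytic average: the
    fixed-effect point estimate, with its standard error rescaled by
    the weighted regression's root mean squared error."""
    w = 1.0 / se ** 2
    est = np.sum(w * effects) / np.sum(w)
    k = len(effects)
    mse = np.sum(w * (effects - est) ** 2) / (k - 1)  # heterogeneity scale
    se_est = np.sqrt(mse / np.sum(w))                 # unrestricted SE
    return est, se_est

effects = np.array([0.10, 0.25, 0.31, 0.05, 0.18])    # hypothetical log odds
se = np.array([0.08, 0.12, 0.15, 0.06, 0.10])         # their standard errors
print(uwls(effects, se))
```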

  10. On the equivalence of Kalman filtering and least-squares estimation

    NASA Astrophysics Data System (ADS)

    Mysen, E.

    2017-01-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
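    The algebraic link is visible in a few lines: a recursive least-squares update is exactly the measurement update of a Kalman filter with a static state. A minimal sketch; the diffuse prior and noise levels are illustrative:

```python
import numpy as np

def rls_update(beta, P, h, z, r=1.0):
    """One recursive least-squares step, algebraically the Kalman
    measurement update for a static state: P is the error covariance,
    h the design row, z the observation, r its variance."""
    k = P @ h / (h @ P @ h + r)           # gain vector
    beta = beta + k * (z - h @ beta)      # parameter/state update
    P = P - np.outer(k, h @ P)            # covariance update
    return beta, P

rng = np.random.default_rng(6)
beta_true = np.array([0.7, -1.3])
beta, P = np.zeros(2), 1e6 * np.eye(2)    # diffuse prior
for _ in range(500):
    h = rng.standard_normal(2)
    z = h @ beta_true + 0.1 * rng.standard_normal()
    beta, P = rls_update(beta, P, h, z, r=0.01)
print(beta)                                # approaches beta_true
```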

  11. Least-squares streamline diffusion finite element approximations to singularly perturbed convection-diffusion problems

    SciTech Connect

    Lazarov, R D; Vassilevski, P S

    1999-05-06

    In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.

  12. First-Order System Least-Squares for the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.

  13. Source allocation by least-squares hydrocarbon fingerprint matching.

    PubMed

    Burns, William A; Mudge, Stephen M; Bence, A Edward; Boehm, Paul D; Brown, John S; Page, David S; Parker, Keith R

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS.

  14. Source allocation by least-squares hydrocarbon fingerprint matching

    SciTech Connect

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  15. Colorimetric characterization of LCD based on constrained least squares

    NASA Astrophysics Data System (ADS)

    LI, Tong; Xie, Kai; Wang, Qiaojie; Yao, Luyang

    2017-01-01

    In order to improve the accuracy of colorimetric characterization of liquid crystal displays, a tone matrix model for display characterization in color management is established by using constrained least squares for quadratic polynomial fitting, finding the relationship between the RGB color space and the CIEXYZ color space; 51 sets of training samples were collected to solve for the parameters, and the accuracy of the color space mapping model was verified with 100 groups of random verification samples. The experimental results showed that, with the constrained least-squares method, the accuracy of the color mapping was high: the maximum color difference of this model is 3.8895 and the average color difference is 1.6689, which proves that the method has a good optimization effect on the colorimetric characterization of liquid crystal displays.

  16. Least-squares finite element methods for quantum chromodynamics

    SciTech Connect

    Ketelsen, Christian; Brannick, J; Manteuffel, T; Mccormick, S

    2008-01-01

    A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2d Schwinger model of quantum electrodynamics.

  17. Anisotropy minimization via least squares method for transformation optics.

    PubMed

    Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

    2014-07-28

    In this work the least-squares method is used to reduce anisotropy in the transformation optics technique. To apply the least-squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.

  18. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    PubMed

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for a water-ethanol solution and an ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS.

  19. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  20. Least-squares finite element method for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1989-01-01

    An overview is given of new developments of the least squares finite element method (LSFEM) in fluid dynamics. Special emphasis is placed on the universality of LSFEM; the symmetry and positiveness of the algebraic systems obtained from LSFEM; the accommodation of LSFEM to equal order interpolations for incompressible viscous flows; and the natural numerical dissipation of LSFEM for convective transport problems and high speed compressible flows. The performance of LSFEM is illustrated by numerical examples.

  1. Software Performance on Nonlinear Least-Squares Problems

    DTIC Science & Technology

    1989-01-01

    ...Murray, and Wright [1981], Dennis and Schnabel [1983], and Moré and Sorensen [1984]. Section 3 reviews the principal approaches that are used in software... (2.3.1) where R is upper-triangular and nonsingular (see, e.g., Stewart [1973], Chapter 3). Gill and Murray alter the Cholesky factorization... problem, J^T J_0 can be used as the initial estimate, provided the columns of J_0 are linearly independent. ... 3. Methods for Nonlinear Least Squares 3.1

  2. Conjugate Gradient Methods for Constrained Least Squares Problems

    DTIC Science & Technology

    1990-01-01

    Conjugate Gradient Methods for Constrained Least Squares Problems, by Douglas James. A thesis submitted to the Graduate Faculty... Conjugate Gradient Methods for Constrained Least Squares Problems (directed by Robert J. Plemmons). ... In 1988, Barlow, Nichols, and Plemmons proposed order... typical). The blocks... [Figure 2.4: Matrices for Static Structures Problem] ...associated with the planar square

  3. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    NASA Astrophysics Data System (ADS)

    Kermarrec, Gaël; Schön, Steffen

    2016-09-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account by means of a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least-squares equations. This condensed matrix, for use in the least-squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the row elements of the weighting matrix. The least-squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning, or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning, as well as precise point positioning cases. The estimate differences between the equivalent and the classical model with a fully populated covariance matrix were below the mm level for all simulated GPS cases and at the sub-mm level for relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
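    For the mean estimator the equivalence is exact and easy to verify numerically: the condensed diagonal weights are the row sums of the fully populated weighting matrix. A small sketch with an assumed AR(1) covariance standing in for the GPS correlation model:

```python
import numpy as np

# Condensed diagonal weights from the row sums of W = Q^{-1}; for the
# mean estimator the DWLS and GLS estimates coincide exactly.
rng = np.random.default_rng(7)
n, rho = 50, 0.6
Q = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1) cov
W = np.linalg.inv(Q)                          # fully populated weight matrix
y = 2.0 + np.linalg.cholesky(Q) @ rng.standard_normal(n)  # correlated obs

X = np.ones((n, 1))                           # design of the mean estimator
gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

w_diag = W.sum(axis=1)                        # row sums -> equivalent diagonal
dwls = np.sum(w_diag * y) / np.sum(w_diag)
print(gls.item(), dwls)                       # identical values
```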

  4. A negative-norm least squares method for Reissner-Mindlin plates

    NASA Astrophysics Data System (ADS)

    Bramble, J. H.; Sun, T.

    1998-07-01

    In this paper a least squares method, using the minus one norm developed by Bramble, Lazarov, and Pasciak, is introduced to approximate the solution of the Reissner-Mindlin plate problem with small parameter t, the thickness of the plate. The reformulation of Brezzi and Fortin is employed to prevent locking. Taking advantage of the least squares approach, we use only continuous finite elements for all the unknowns. In particular, we may use continuous linear finite elements. The difficulty of satisfying the inf-sup condition is overcome by the introduction of a stabilization term into the least squares bilinear form, which is very cheap computationally. It is proved that the error of the discrete solution is optimal with respect to regularity and uniform with respect to the parameter t. Apart from the simplicity of the elements, the stability theorem gives a natural block diagonal preconditioner of the resulting least squares system. For each diagonal block, one only needs a preconditioner for a second order elliptic problem.

  5. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.

  6. The extended-least-squares treatment of correlated data

    SciTech Connect

    Cohen, E.R.; Tuninsky, V.S.

    1994-12-31

    A generalization of the extended-least-squares algorithms for the case of correlated discrepant data is given. The expressions for the linear, unbiased, minimum-variance estimators (LUMVE) derived before are reformulated. A posteriori estimates of the variance, taking into account the inconsistency of all of the experimental data, have the same form as in the case of non-correlated data. These estimates extend the previous improvement on the "traditional" Birge-ratio procedures to the case of correlated input data.

  7. Recursive least squares estimation and Kalman filtering by systolic arrays

    NASA Technical Reports Server (NTRS)

    Chen, M. J.; Yao, K.

    1988-01-01

    One of the most promising new directions for high-throughput-rate problems is that based on systolic arrays. In this paper, using the matrix-decomposition approach, a systolic Kalman filter is formulated as a modified square-root information filter consisting of a whitening filter followed by a simple least-squares operation based on the systolic QR algorithm. By proper skewing of the input data, a fully pipelined time and measurement update systolic Kalman filter can be achieved with O(n^2) processing cells, resulting in a system throughput rate of O(n).

  8. Single directional SMO algorithm for least squares support vector machines.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2013-01-01

    Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster than that of existing ones.

  9. Least-squares analysis of the Mueller matrix

    NASA Astrophysics Data System (ADS)

    Reimer, Michael; Yevick, David

    2006-08-01

    In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
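    The basic estimation step has a closed form: stacking the input and output Stokes vectors as columns, the least-squares matrix estimate follows from the normal equations. In the sketch below random vectors stand in for the measured Stokes states, so the physical constraints on true Stokes vectors are ignored:

```python
import numpy as np

# Estimate a 4x4 Mueller matrix M from repeated measurements S_out = M S_in
# by minimizing ||M S_in - S_out||_F^2 in closed form.
rng = np.random.default_rng(8)
M_true = rng.standard_normal((4, 4))
S_in = rng.standard_normal((4, 200))                 # random input states
S_out = M_true @ S_in + 0.01 * rng.standard_normal((4, 200))

M_est = S_out @ S_in.T @ np.linalg.inv(S_in @ S_in.T)
print(np.max(np.abs(M_est - M_true)))                # small residual
```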

  10. Constrained least-squares regression in color spaces

    NASA Astrophysics Data System (ADS)

    Finlayson, Graham D.; Drew, Mark S.

    1997-10-01

    To characterize color values measured by color devices (e.g., scanners, color copiers, and color cameras) in a device-independent fashion, these values must be transformed to colorimetric tristimulus values. The measured RGB 3-vectors are not a linear transformation away from such colorimetric vectors, but the best transformation between these two data sets, or between RGB values measured under different illuminants, can still easily be determined. Two well-known methods for determining this transformation are the simple least-squares fit procedure and Vrhel's principal component method. The former approach makes no a priori statement about which colors will be mapped well and which will be mapped poorly. Depending on the data set, a white reflectance may be mapped accurately or inaccurately. In contrast, the principal component method solves for the transform that exactly maps a particular set of basis surfaces between illuminants (where the basis is usually designed to capture the statistics of a set of spectral reflectance data), and hence some statement can be made about which colors will be mapped without error. Unfortunately, even if the basis set fits real reflectances well, this does not guarantee good color correction. Here we propose a new, compromise, constrained regression method based on finding the mapping which maps a single (or possibly two) basis surface(s) without error and, subject to this constraint, also minimizes the sum of squared differences between the mapped RGB data and the corresponding XYZ tristimulus values. The constrained regression is particularly useful either when it is crucial to map a particular color with great accuracy or when there is incomplete calibration data. For example, it is generally desirable that the device coordinates for a white reflectance should always map exactly to the XYZ tristimulus white. Surprisingly, we show that when no statistics about reflectances are known then a white-point preserving mapping
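    A white-point preserving variant of such a regression can be written as one small KKT system per output channel. The sketch below is a generic equality-constrained least-squares construction, not the authors' exact formulation, and all data are synthetic:

```python
import numpy as np

def white_preserving_map(rgb, xyz, rgb_white, xyz_white):
    """Find the 3x3 matrix M minimizing ||rgb @ M - xyz||_F subject to
    rgb_white @ M = xyz_white, so the device white maps exactly to the
    tristimulus white (solved channel by channel via a KKT system)."""
    A = rgb.T @ rgb
    M = np.empty((3, 3))
    for k in range(3):                        # one constrained LS per channel
        K = np.zeros((4, 4))
        K[:3, :3] = 2 * A
        K[:3, 3] = rgb_white                  # constraint gradient
        K[3, :3] = rgb_white
        rhs = np.append(2 * rgb.T @ xyz[:, k], xyz_white[k])
        M[:, k] = np.linalg.solve(K, rhs)[:3]
    return M

rng = np.random.default_rng(9)
rgb = rng.uniform(0, 1, (60, 3))              # synthetic device responses
T = rng.uniform(0, 1, (3, 3))
xyz = rgb @ T + 0.01 * rng.standard_normal((60, 3))
rgb_white, xyz_white = np.ones(3), np.ones(3) @ T
M = white_preserving_map(rgb, xyz, rgb_white, xyz_white)
print(np.allclose(rgb_white @ M, xyz_white))  # constraint holds exactly
```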

  11. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

    2003-01-01

    The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
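    The weighted construction amounts to scaling each neighbour's row of the least-squares system by an inverse-distance factor. A minimal sketch for a 2D stencil with one highly stretched direction (p = 1 gives the inverse distance weighting discussed above, p = 0 the unweighted form):

```python
import numpy as np

def ls_gradient(xc, x_nbrs, u_c, u_nbrs, p=1):
    """Least-squares gradient at a point from its neighbours, each row
    weighted by 1/distance**p; exact for linear fields regardless of p."""
    d = x_nbrs - xc                           # displacement vectors
    w = 1.0 / np.linalg.norm(d, axis=1) ** p  # inverse distance weights
    A = d * w[:, None]
    b = (u_nbrs - u_c) * w
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# Highly stretched stencil, as on a boundary-layer mesh.
xc = np.array([0.0, 0.0])
x_nbrs = np.array([[1.0, 0.0], [0.0, 0.001], [-1.0, 0.0], [0.0, -0.001]])
u = lambda x: 3.0 * x[:, 0] + 5.0 * x[:, 1]   # linear field
print(ls_gradient(xc, x_nbrs, 0.0, u(x_nbrs)))  # [3, 5] exactly
```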

  12. Generalized least-squares fit of multiequation models

    NASA Astrophysics Data System (ADS)

    Marshall, Simon L.; Blencoe, James G.

    2005-01-01

    A method for fitting multiequation models to data sets of finite precision is proposed. This is based on the Gauss-Newton algorithm devised by Britt and Luecke (1973); the inclusion of several equations of condition to be satisfied at each data point results in a block diagonal form for the effective weighting matrix. This method allows generalized nonlinear least-squares fitting of functions that are more easily represented in the parametric form (x(t),y(t)) than as an explicit functional relationship of the form y=f(x). The Aitken (1935) formulas appropriate to multiequation weighted nonlinear least squares are recovered in the limiting case where the variances and covariances of the independent variables are zero. Practical considerations relevant to the performance of such calculations, such as the evaluation of the required partial derivatives and matrix products, are discussed in detail, and the operation of the algorithm is illustrated by applying it to the fit of complex permittivity data to the Debye equation.

  13. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least-squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least-squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China's regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
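    The first ingredient can be sketched directly from its description: a Moran-like quotient of the standardized residual vector under a row-normalized spatial weight matrix. The inverse-distance weights below are an illustrative choice, not the paper's:

```python
import numpy as np

def residual_autocorrelation(e, W):
    """Moran-like serial correlation index for regression residuals:
    standardized residuals paired with a row-normalized weight matrix."""
    z = (e - e.mean()) / e.std()
    Wn = W / W.sum(axis=1, keepdims=True)     # row-normalize the weights
    return z @ Wn @ z / (z @ z)

# Toy example: random points with inverse-distance spatial weights.
rng = np.random.default_rng(10)
xy = rng.uniform(0, 1, (40, 2))
D = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
W = np.where(D > 0, 1.0 / (D + np.eye(40)), 0.0)   # zero diagonal
e = rng.standard_normal(40)                   # residuals from some regression
print(residual_autocorrelation(e, W))         # near 0 for random residuals
```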

  14. Faraday rotation data analysis with least-squares elliptical fitting

    SciTech Connect

    White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
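    A simpler cousin of the direct elliptical fit can be written as a single linear solve by normalizing the conic's constant term. This sketch omits the ellipse-specific constraint of the full direct method and assumes the data encircle a point away from the origin; the synthetic curve stands in for a measured Lissajous figure:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit a x^2 + b xy + c y^2 + d x + e y = 1,
    solved as one linear system (adequate for off-origin ellipses)."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef

# Noisy points on a rotated, shifted ellipse (a Lissajous-like figure).
t = np.linspace(0, 2 * np.pi, 200)
x = 3.0 + 2.0 * np.cos(t) - 0.8 * np.sin(t)
y = 1.5 + 1.1 * np.sin(t)
rng = np.random.default_rng(11)
x += 0.01 * rng.standard_normal(t.size)
y += 0.01 * rng.standard_normal(t.size)
print(fit_conic(x, y))   # conic coefficients; centre and axes follow
```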

  15. Cross-term free based bistatic radar system using sparse least squares

    NASA Astrophysics Data System (ADS)

    Sevimli, R. Akin; Cetin, A. Enis

    2015-05-01

    Passive Bistatic Radar (PBR) systems use illuminators of opportunity, such as FM, TV, and DAB broadcasts. The most common illuminator of opportunity used in PBR systems is the FM radio stations. Single FM channel based PBR systems do not have high range resolution and may turn out to be noisy. In order to enhance the range resolution of the PBR systems algorithms using several FM channels at the same time are proposed. In standard methods, consecutive FM channels are translated to baseband as is and fed to the matched filter to compute the range-Doppler map. Multichannel FM based PBR systems have better range resolution than single channel systems. However superious sidelobe peaks occur as a side effect. In this article, we linearly predict the surveillance signal using the modulated and delayed reference signal components. We vary the modulation frequency and the delay to cover the entire range-Doppler plane. Whenever there is a target at a specific range value and Doppler value the prediction error is minimized. The cost function of the linear prediction equation has three components. The first term is the real-part of the ordinary least squares term, the second-term is the imaginary part of the least squares and the third component is the l2-norm of the prediction coefficients. Separate minimization of real and imaginary parts reduces the side lobes and decrease the noise level of the range-Doppler map. The third term enforces the sparse solution on the least squares problem. We experimentally observed that this approach is better than both the standard least squares and other sparse least squares approaches in terms of side lobes. Extensive simulation examples will be presented in the final form of the paper.

  16. Comparison of Total Least Squares and Least Squares for Four- and Seven-parameter Model Coordinate Transformation

    NASA Astrophysics Data System (ADS)

    Wu, You; Liu, Jun; Ge, Hui Yong

    2016-12-01

    Total least squares (TLS) is a technique that solves the traditional least squares (LS) problem for an errors-in-variables (EIV) model, in which both the observation vector and the design matrix are contaminated by random errors. The four- and seven-parameter models of coordinate transformation are typical EIV models. To determine which of TLS and LS is more effective, taking the four- and seven-parameter models of Global Navigation Satellite System (GNSS) coordinate transformation with different coincidence points as examples, the relative effectiveness of the two methods was compared through simulation experiments. The results showed that in the EIV model, the errors-in-variables-only (EIVO) model, and the errors-in-observations-only (EIOO) model, TLS is slightly inferior to LS in the four-parameter model coordinate transformation, and TLS is equivalent to LS in the seven-parameter model coordinate transformation. Consequently, in the four- and seven-parameter model coordinate transformation, TLS has no obvious advantage over LS.

  17. RNA structural motif recognition based on least-squares distance.

    PubMed

    Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin

    2013-09-01

    RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. We compile a test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.

  18. Flow Applications of the Least Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  19. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  20. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
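
    For readers unfamiliar with the recursive formulation, a minimal recursive least squares loop looks like the sketch below; the regressors, noise level, and initialization are illustrative, and the paper's residual-autocorrelation correction is not reproduced.

```python
# Minimal recursive least squares (RLS) parameter estimator (illustrative only).
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One RLS step: regressor x, measurement y, forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    e = y - x @ theta                       # innovation (prediction residual)
    theta = theta + k * e
    P = (P - np.outer(k, Px)) / lam         # covariance update
    return theta, P

rng = np.random.default_rng(2)
true_theta = np.array([0.5, -1.2])
theta, P = np.zeros(2), 1e3 * np.eye(2)     # large initial covariance = weak prior
for _ in range(500):
    x = rng.normal(size=2)
    y = x @ true_theta + 0.05 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                                # approaches [0.5, -1.2]
```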

  1. Cognitive assessment in mathematics with the least squares distance method.

    PubMed

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes, while students from the two eastern regions significantly underperformed on most attributes.

  2. Local validation of EU-DEM using Least Squares Collocation

    NASA Astrophysics Data System (ADS)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, applying Least Squares Collocation to the residuals remaining after the surface fit. Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.
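
    The collocation step can be summarized by the standard predictor s = C_ps (C_ss + D)^(-1) l applied to the post-fit residuals. The sketch below assumes a Gaussian covariance function, one-dimensional station coordinates for brevity, and invented parameter values.

```python
# Schematic least-squares collocation prediction of height residuals (illustrative).
import numpy as np

def lsc_predict(x_obs, res_obs, x_new, c0=1.0, L=500.0, noise=0.01):
    """Predict residuals at x_new from observed residuals res_obs (1-D positions)."""
    def cov(a, b):
        d = np.abs(a[:, None] - b[None, :])
        return c0 * np.exp(-(d / L) ** 2)                 # assumed Gaussian covariance
    Css = cov(x_obs, x_obs) + noise * np.eye(len(x_obs))  # signal + noise covariance
    Cps = cov(x_new, x_obs)
    return Cps @ np.linalg.solve(Css, res_obs)            # s = C_ps (C_ss + D)^-1 l

x_obs = np.array([0.0, 200.0, 450.0, 800.0])              # station coordinates (m)
res_obs = np.array([0.12, 0.05, -0.08, 0.10])             # post-fit residuals (m)
print(lsc_predict(x_obs, res_obs, np.array([300.0, 600.0])))
```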

  3. Assessment of column selection systems using Partial Least Squares.

    PubMed

    Žuvela, Petar; Liu, J Jay; Plenis, Alina; Bączek, Tomasz

    2015-11-13

    Column selection systems based on a scalar measure derived from the Euclidean distance between chromatographic columns suffer from a common issue: for diverse values of their parameters, identical or near-identical distance values can be calculated. Proper use of chemometric methods can not only provide a remedy, but also reveal the underlying correlation between the parameters. In this work, parameters of a well-established column selection system (CSS) developed at Katholieke Universiteit Leuven (KUL CSS) were directly correlated to parameters of selectivity (retention time, resolution, and peak/valley ratio) toward pharmaceuticals by employing Partial Least Squares (PLS). Two case studies were evaluated: the separation of alfuzosin and of lamotrigine, respectively, together with their impurities. Within them, a comprehensive correlation structure was revealed and thoroughly interpreted, confirming a causal relationship between KUL parameters and parameters of column performance. Furthermore, it was shown that the developed methodology can be applied to any distance-based column selection system.
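
    A hedged sketch of this kind of PLS analysis using scikit-learn, with invented data (four stand-in column parameters as X, three selectivity measures as Y; neither the KUL CSS values nor the paper's data are reproduced):

```python
# PLS regression of selectivity measures on column-system parameters (illustrative).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))                       # stand-in column parameters
Y = X @ rng.normal(size=(4, 3)) + 0.1 * rng.normal(size=(30, 3))  # selectivity

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print(pls.score(X, Y))          # R^2 of the latent-variable model
print(pls.x_loadings_)          # loadings expose the correlation structure
```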

  4. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi Hu, Rui Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  5. Spreadsheet for designing valid least-squares calibrations: A tutorial.

    PubMed

    Bettencourt da Silva, Ricardo J N

    2016-02-01

    Instrumental methods of analysis are used to define the price of goods, the compliance of products with a regulation, or the outcome of fundamental or applied research. These methods can only play their role properly if reported information is objective and their quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or by a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not used as frequently because of the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can easily be used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial covers the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for application of the least-squares regression model, the assessment of instrumental response linearity, and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful for cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented.
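
    The least-squares machinery such a tutorial automates can be sketched with the textbook formulas for a straight-line calibration and the standard uncertainty of an interpolated concentration; the numbers below are invented, and the spreadsheet's exact layout is not reproduced.

```python
# Straight-line calibration with uncertainty of an interpolated result (textbook formulas).
import numpy as np

x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # calibrator concentrations
y = np.array([0.02, 0.41, 0.80, 1.22, 1.58, 2.02])  # instrument responses

n = len(x)
b, a = np.polyfit(x, y, 1)                          # slope, intercept
s_yx = np.sqrt(((y - (a + b * x)) ** 2).sum() / (n - 2))   # residual std. dev.

y0 = np.array([0.95, 0.97, 0.96])                   # m replicate sample signals
m = len(y0)
x0 = (y0.mean() - a) / b                            # interpolated concentration
Sxx = ((x - x.mean()) ** 2).sum()
s_x0 = (s_yx / abs(b)) * np.sqrt(1/m + 1/n + (y0.mean() - y.mean())**2 / (b**2 * Sxx))
print(x0, s_x0)                                     # result and standard uncertainty
```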

  6. On the method of least squares. II. [for calculation of covariance matrices and optimization algorithms]

    NASA Technical Reports Server (NTRS)

    Jefferys, W. H.

    1981-01-01

    A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.

  7. Least squares adjustment of large-scale geodetic networks by orthogonal decomposition

    SciTech Connect

    George, J.A.; Golub, G.H.; Heath, M.T.; Plemmons, R.J.

    1981-11-01

    This article reviews some recent developments in the solution of large sparse least squares problems typical of those arising in geodetic adjustment problems. The new methods are distinguished by their use of orthogonal transformations which tend to improve numerical accuracy over the conventional approach based on the use of the normal equations. The adaptation of these new schemes to allow for the use of auxiliary storage and their extension to rank deficient problems are also described.
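
    The numerical advantage of orthogonal transformations shows up even in a small dense example: forming the normal equations squares the condition number, while QR works on the matrix directly. The sketch below is generic and omits the paper's sparse, out-of-core machinery.

```python
# Normal equations vs. QR on an ill-conditioned least-squares problem (illustrative).
import numpy as np

A = np.vander(np.linspace(0.0, 1.0, 40), 12)   # deliberately ill-conditioned
x_true = np.ones(12)
b = A @ x_true

# Normal equations: cond(A^T A) = cond(A)^2, so accuracy suffers.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization keeps the original conditioning of A.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.linalg.norm(x_ne - x_true))           # visibly larger error
print(np.linalg.norm(x_qr - x_true))
```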

  8. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.

  9. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    SciTech Connect

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  10. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail here, the computational evidence suggests that these methods are at least nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  11. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the

  12. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  13. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    SciTech Connect

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed and requires no regularization. No modification is needed to the adjustment formulae that have been used in the past: they remain valid in the case of a singular covariance matrix for the prior parameters, and they provide a unique solution. (author)

  14. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    ...high-order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing... Keywords: adaptive mesh refinement, discontinuous Galerkin method, non-conforming mesh, IMEX, compressible Euler equations, atmospheric simulations.

  15. Neither fixed nor random: weighted least squares meta-regression.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2017-03-01

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
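
    In estimator terms, the unrestricted WLS-MRA is an ordinary weighted least-squares regression with inverse-variance weights and no random-effects variance component. A hedged sketch on simulated data (not the authors' code or data):

```python
# Unrestricted WLS meta-regression with inverse-variance weights (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
k = 40
se = rng.uniform(0.05, 0.5, k)                 # reported standard errors
moderator = rng.normal(size=k)
effect = 0.3 + 0.2 * moderator + se * rng.normal(size=k)

X = sm.add_constant(moderator)
wls = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print(wls.params)                              # meta-regression coefficients
print(wls.bse)                                 # unrestricted WLS standard errors
```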

  16. Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors

    SciTech Connect

    Gavel, D

    2002-10-08

    A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.
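
    The mode-space view can be sketched as follows: take the SVD of the influence (poke) matrix, inspect the singular modes, and zero out the weakly sensed ones before forming the pseudoinverse. The matrix and threshold below are stand-ins, not the paper's preconditioning technique.

```python
# Filtering weakly sensed modes out of a least-squares reconstructor (illustrative).
import numpy as np

rng = np.random.default_rng(5)
G = rng.normal(size=(200, 120))                     # slopes = G @ phase (stand-in)
G[:, -1] = G[:, -2] + 1e-3 * rng.normal(size=200)   # one nearly unsensed direction

U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 0.05 * s[0]                              # assumed sensitivity threshold
s_inv = np.where(keep, 1.0 / s, 0.0)                # zero out filtered modes

R = (Vt.T * s_inv) @ U.T                            # filtered reconstructor: phase = R @ slopes
print(int(keep.sum()), "of", len(s), "singular modes retained")
```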

  17. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

  18. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(−τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
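
    In its simplest least-squares form, a Langley calibration is a straight-line fit of ln(V) against air mass m; the sketch below uses synthetic data and ignores the smoothing and filtering steps the paper compares.

```python
# Least-squares Langley plot: intercept of ln(V) vs. air mass estimates ln(V0).
import numpy as np

m = np.linspace(1.2, 5.0, 25)                   # air masses over a clear morning
tau_true, lnV0_true = 0.12, 1.8                 # assumed optical depth and calibration
lnV = lnV0_true - tau_true * m + 0.005 * np.random.default_rng(6).normal(size=25)

slope, lnV0 = np.polyfit(m, lnV, 1)
print("ln(V0) =", lnV0, " tau =", -slope)       # calibration constant, optical depth
```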

  19. River flow time series using least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2011-06-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables, which serve as inputs to the LSSVM time series forecasting model. Monthly river flow data from two stations, on the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia, were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.
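
    For reference, LSSVM training reduces to a single linear solve of the dual system. The sketch below shows that core step with an assumed RBF kernel and invented hyperparameters; the GMDH input-selection stage of the hybrid model is not reproduced.

```python
# Least squares support vector machine (LSSVM) regression in one linear solve.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    # Dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                           # bias b, coefficients alpha

def lssvm_predict(X_new, X, b, alpha, sigma=1.0):
    return rbf(X_new, X, sigma) @ alpha + b

X = np.linspace(0.0, 10.0, 40)[:, None]              # toy monthly-flow style series
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X[:3], X, b, alpha))             # ~ y[:3]
```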

  20. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·e^(−τ·m), where a plot of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  1. Prediction of solubility parameters using partial least square regression.

    PubMed

    Tantishaiyakul, Vimon; Worakul, Nimit; Wongpoowarak, Wibul

    2006-11-15

    The total solubility parameter (delta) values were effectively predicted by using computed molecular descriptors and multivariate partial least squares (PLS) statistics. The molecular descriptors in the derived models included heat of formation, dipole moment, molar refractivity, solvent-accessible surface area (SA), surface-bounded molecular volume (SV), unsaturated index (Ui), and hydrophilic index (Hy). The values of these descriptors were computed by the use of HyperChem 7.5, QSPR Properties module in HyperChem 7.5, and Dragon Web version. The other two descriptors, hydrogen bonding donor (HD), and hydrogen bond-forming ability (HB) were also included in the models. The final reduced model of the whole data set had R(2) of 0.853, Q(2) of 0.813, root mean squared error from the cross-validation of the training set (RMSEcv(tr)) of 2.096 and RMSE of calibration (RMSE(tr)) of 1.857. No outlier was observed from this data set of 51 diverse compounds. Additionally, the predictive power of the developed model was comparable to the well recognized systems of Hansen, van Krevelen and Hoftyzer, and Hoy.

  2. A duct mapping method using least squares support vector machines

    NASA Astrophysics Data System (ADS)

    Douvenot, Rémi; Fabbro, Vincent; Gerstoft, Peter; Bourlier, Christophe; Saillard, Joseph

    2008-12-01

    This paper introduces a "refractivity from clutter" (RFC) approach with an inversion method based on a pregenerated database. The RFC method exploits the information contained in the radar sea clutter return to estimate the refractive index profile. Whereas initial efforts were based on algorithms that achieve good accuracy at the cost of high computational load, the present method is based on a learning-machine algorithm in order to obtain a real-time system. This paper shows the feasibility of an RFC technique based on the least squares support vector machine inversion method by comparing it to a genetic algorithm on simulated, noise-free data at 1 and 5 GHz. These data are simulated in the presence of ideal trilinear surface-based ducts. The learning machine is based on a pregenerated database computed using Latin hypercube sampling to improve the efficiency of the learning. The results show that little accuracy is lost compared to a genetic algorithm approach. The computational time of a genetic algorithm is very high, whereas the learning-machine approach runs in real time. The advantage of a real-time RFC system is that it could work on several azimuths in near real time.

  3. Bootstrapping least-squares estimates in biochemical reaction networks.

    PubMed

    Linder, Daniel F; Rempała, Grzegorz A

    2015-01-01

    The paper proposes new computational methods for obtaining confidence bounds for the least-squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large-volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large-volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods.

  4. Generalized total least squares prediction algorithm for universal 3D similarity transformation

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie

    2017-02-01

    Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (probably with big rotation angles and an arbitrary scale ratio). However, their procedures of the parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation which often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information will be considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model and the equation for new point transformation is incorporated into the functional model as well. Then the iterative solution is derived based on the Gauss-Newton approach of nonlinear least squares. The performance of GTLSP algorithm is verified in terms of a simulated experiment, and the results show that GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.

  5. Fast Dating Using Least-Squares Criteria and Algorithms.

    PubMed

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

  6. Least-squares reverse time migration in elastic media

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2017-02-01

    Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.

  7. Least-squares reverse time migration in elastic media

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2016-11-01

    Elastic reverse time migration (RTM) can yield more subsurface information (e.g. PP and PS reflectivity) by imaging the multi-component seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyze the influence of model parameterizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parameterizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parameterizations produce fewer artifacts caused by parameter crosstalk than the Lamé coefficient parameterization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its anti-noise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.

  8. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    SciTech Connect

    Dana Kelly

    2011-03-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
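
    One generic way to realize "fit a Dirichlet by constrained least squares" is to choose parameters whose implied marginal means and variances match targets in a least-squares sense. The sketch below illustrates only that idea; the objective, targets, and constraints are assumptions, not the paper's exact formulation.

```python
# Fitting Dirichlet parameters to target marginal moments by least squares (illustrative).
import numpy as np
from scipy.optimize import minimize

target_mean = np.array([0.95, 0.04, 0.01])     # alpha-factor-style target means
target_var = np.array([2e-3, 5e-4, 1e-4])      # assumed target variances

def moments(alpha):
    a0 = alpha.sum()
    mean = alpha / a0
    var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1.0))
    return mean, var

def objective(alpha):
    mean, var = moments(alpha)
    return ((mean - target_mean) ** 2).sum() + ((var - target_var) ** 2).sum()

res = minimize(objective, x0=np.ones(3), bounds=[(1e-6, None)] * 3)
print(res.x)                                   # fitted Dirichlet parameters
```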

  9. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    SciTech Connect

    Dana Kelly; Corwin Atwood

    2011-03-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  10. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  11. Simulation of nanoparticle transport in airways using Petrov-Galerkin finite element methods.

    PubMed

    Rajaraman, Prathish; Heys, Jeffrey J

    2014-01-01

    The transport and deposition properties of nanoparticles with a range of aerodynamic diameters (1 nm ≤ d ≤ 150 nm) were studied for the human airways. A finite element code was developed that solved both the Navier-Stokes and advection-diffusion equations monolithically. When modeling nanoparticle transport in the airways, the finite element method becomes unstable, and, in order to resolve this issue, various stabilization methods were considered in terms of accuracy and computational cost. The stabilization methods considered here include the streamline upwind, streamline upwind Petrov-Galerkin, and Galerkin least squares approaches. In order to compare the various stabilization approaches, the approximate solution from each stabilization approach was compared to the analytical Graetz solution, which is a model for monodispersed, dilute particle transport in a straight cylinder. The optimal stabilization method, especially with regard to accuracy, was found to be the Galerkin least squares approach for the Graetz problem when the Péclet number was larger than 10^4. In the human airways geometry, the Galerkin least squares stabilization approach once more provided the most accurate approximate solution for particles with an aerodynamic diameter of 10 nm or larger, but mesh size had a much greater effect on accuracy than the choice of stabilization method. The choice of stabilization method had a greater impact than mesh size for particles with an aerodynamic diameter of 10 nm or smaller, but the most accurate stabilization method was streamline upwind Petrov-Galerkin in these cases.

  12. Analyzing industrial energy use through ordinary least squares regression models

    NASA Astrophysics Data System (ADS)

    Golden, Allyson Katherine

    Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
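
    A minimal change-point (balance-point) model of the kind described can be fit with ordinary least squares plus a one-dimensional grid search over the change point; the data and search grid below are invented for illustration.

```python
# Three-parameter change-point baseline energy model via OLS (illustrative).
import numpy as np

rng = np.random.default_rng(7)
T = rng.uniform(40.0, 95.0, 120)                     # daily dry-bulb temperatures
E = 500.0 + 12.0 * np.maximum(T - 65.0, 0.0) + rng.normal(0.0, 20.0, 120)

best = None
for Tb in np.arange(50.0, 80.0, 0.5):                # grid-search the balance point
    X = np.column_stack([np.ones_like(T), np.maximum(T - Tb, 0.0)])
    beta, *_ = np.linalg.lstsq(X, E, rcond=None)
    sse = ((E - X @ beta) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, Tb, beta)
_, Tb, (base, slope) = best
print("balance point:", Tb, "base load:", base, "cooling slope:", slope)
```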

  13. Effects of nonlinearities and uncorrelated or correlated errors in realistic simulated data on the prediction abilities of augmented classical least squares and partial least squares.

    PubMed

    Melgaard, David K; Haaland, David M

    2004-09-01

    Comparisons of prediction models from the new augmented classical least squares (ACLS) and partial least squares (PLS) multivariate spectral analysis methods were conducted using simulated data containing deviations from the idealized model. The simulated data were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions. Simulated uncorrelated concentration errors, uncorrelated and correlated spectral noise, and nonlinear spectral responses were included to evaluate the methods on situations representative of experimental data. The statistical significance of differences in prediction ability was evaluated using the Wilcoxon signed rank test. The prediction differences were found to be dependent on the type of noise added, the numbers of calibration samples, and the component being predicted. For analyses applied to simulated spectra with noise-free nonlinear response, PLS was shown to be statistically superior to ACLS for most of the cases. With added uncorrelated spectral noise, both methods performed comparably. Using 50 calibration samples with simulated correlated spectral noise, PLS showed an advantage in 3 out of 9 cases, but the advantage dropped to 1 out of 9 cases with 25 calibration samples. For cases with different noise distributions between calibration and validation, ACLS predictions were statistically better than PLS for two of the four components. Also, when experimentally derived correlated spectral error was added, ACLS gave better predictions that were statistically significant in 15 out of 24 cases simulated. On data sets with nonuniform noise, neither method was statistically better, although ACLS usually had smaller standard errors of prediction (SEPs). The varying results emphasize the need to use realistic simulations when making comparisons between various multivariate calibration methods. Even when the differences between the standard error of predictions were statistically

  14. SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM

    SciTech Connect

    WOHLBERG, BRENDT; RODRIGUEZ, PAUL

    2007-01-08

    Basis Pursuit and Basis Pursuit Denoising, well established techniques for computing sparse representations, minimize an ℓ2 data fidelity term subject to an ℓ1 sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ1 data fidelity term has recently been proposed, also implemented via a mapping to a linear program. This paper introduces an alternative approach via an Iteratively Reweighted Least Squares algorithm, providing greater flexibility in the choice of data fidelity term norm, and computational advantages in certain circumstances.
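
    The core IRLS idea for an ℓ1 data fidelity term is to solve a sequence of weighted ℓ2 problems with weights 1/max(|r_i|, ε) computed from the current residuals. A minimal sketch on random stand-in data (not the authors' implementation):

```python
# IRLS for min ||Ax - b||_1 via successive weighted least squares (illustrative).
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]         # plain l2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)         # downweight large residuals
        Aw = A * w[:, None]                          # rows scaled by the weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)      # weighted normal equations
    return x

rng = np.random.default_rng(8)
A = rng.normal(size=(100, 5))
x_true = rng.normal(size=5)
b = A @ x_true
b[::7] += 5.0                                        # sparse gross corruptions
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_l2 - x_true), np.linalg.norm(irls_l1(A, b) - x_true))
```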

  15. Optimization of absorption placement using geometrical acoustic models and least squares.

    PubMed

    Saksela, Kai; Botts, Jonathan; Savioja, Lauri

    2015-04-01

    Given a geometrical model of a space, the problem of optimally placing absorption in a space to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in the optimal absorption placement on the walls and ceiling of a classroom.
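
    Since the modeled response is linear in the absorption coefficients, the reformulation amounts to a bounded linear least-squares problem. The sketch below uses SciPy's lsq_linear with a random placeholder for the beam-tracing sensitivity matrix.

```python
# Absorption assignment as bounded linear least squares (illustrative placeholder data).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(9)
A = rng.uniform(0.0, 1.0, size=(60, 12))   # response sensitivity to each surface
target = rng.uniform(0.0, 1.0, size=60)    # desired impulse-response features

res = lsq_linear(A, target, bounds=(0.0, 1.0))   # physical bounds on absorption
print(res.x)                               # absorption assigned to the 12 surfaces
```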

  16. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  17. Stable least-squares matching for oblique images using bound constrained optimization and a robust loss function

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Xie, Linfu; Chen, Min

    2016-08-01

    Least-squares matching is a standard procedure in photogrammetric applications for obtaining sub-pixel accuracy of image correspondences. However, least-squares matching has also been criticized for its instability, which is primarily reflected in its requirements for a good initial correspondence and favorable image quality. In image matching between oblique images, the blur, illumination differences and other effects make the image attributes of different views notably different, which results in a more severe convergence problem. Aiming at improving the convergence rate and robustness of least-squares matching of oblique images, we incorporated prior geometric knowledge into the optimization process, expressed as bound constraints on the optimized parameters that confine the search for a solution to a reasonable region. Furthermore, to be resilient to outliers, we substituted the squared loss with a robust loss function. To solve the composite problem, we reformulated least-squares matching as a bound-constrained optimization problem, which can be solved with a bound-constrained Levenberg-Marquardt solver. Experimental results on images from two different penta-view oblique camera systems confirmed that the proposed method shows guaranteed final convergence in various scenarios, compared to the approximately 20-50% convergence rate of classical least-squares matching.
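
    The two ingredients, box bounds from prior geometry and a robust loss, are both available in SciPy's bound-constrained least-squares solver; the residual function below is a toy stand-in for the actual photometric/geometric matching residuals.

```python
# Bound-constrained, robust least-squares fit with SciPy (illustrative residuals).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
t = np.linspace(0.0, 1.0, 50)
obs = 1.5 * t + 0.2 + 0.02 * rng.normal(size=50)
obs[::10] += 1.0                                     # gross outliers

def residuals(p):
    shift, gain = p
    return gain * t + shift - obs

fit = least_squares(residuals, x0=[0.0, 1.0],
                    bounds=([-0.5, 0.5], [0.5, 2.0]),   # prior geometric knowledge
                    loss="soft_l1", f_scale=0.1)        # robust to outliers
print(fit.x)                                         # ~ [0.2, 1.5]
```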

  18. Partitionability of Implicit Least Squares Model Fitting Problems

    DTIC Science & Technology

    1981-05-01

    Contents: the implicit least squares model fitting problem; iteration algorithms and effects of data perturbations; partitionability. ... equations can be used to analyze the sensitivity of the solution to data perturbations, and to obtain numerical values of optimal residuals and parameters. ... However, the numerical solution of the system is not necessarily trivial, because the size of the system is proportional to the number of data

  19. Sparse partial least squares regression for simultaneous dimension reduction and variable selection.

    PubMed

    Chun, Hyonho; Keleş, Sündüz

    2010-01-01

    Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that known asymptotic consistency of the partial least squares estimator for a univariate response does not hold with the very large p and small n paradigm. We derive a similar result for a multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data.
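
    A minimal univariate-response sketch of the central idea, soft-thresholding each PLS direction vector so that only a few predictors enter each latent component; this mirrors the spirit of the sparse PLS formulation rather than the authors' exact algorithm, and the function names and the eta parameter are illustrative.

    ```python
    import numpy as np

    def soft_threshold(v, lam):
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def sparse_pls_directions(X, y, n_components=2, eta=0.5):
        """Sparse weight vectors; eta in [0, 1) sets the sparsity level."""
        Xd, yd = X - X.mean(0), y - y.mean()
        W = []
        for _ in range(n_components):
            c = Xd.T @ yd                             # ordinary PLS direction
            w = soft_threshold(c, eta * np.abs(c).max())
            w /= np.linalg.norm(w)
            t = Xd @ w                                # latent score
            Xd = Xd - np.outer(t, t @ Xd) / (t @ t)   # deflate X
            yd = yd - t * (t @ yd) / (t @ t)          # deflate y
            W.append(w)
        return np.column_stack(W)
    ```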

  20. Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2005-01-01

    There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…

  1. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

    1991-01-01

    The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS)-based personal computer (PC) is addressed. The results of a study comparing the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), are also presented. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

  2. Models of spectral unmixing: simplex versus least squares method of resolution

    NASA Astrophysics Data System (ADS)

    Lavreau, Johan

    1995-01-01

    Spectral unmixing is referred to in textbooks as a straightforward technique whose application apparently encounters no problems, yet operational applications are scarce in the literature. The method usually used is based on least squares, minimizing the error in search of the best-fit solution. This method, however, poses problems when applied to real data as the number of end-members increases and/or the compositions of end-members are similar. An alternative method based on linear algebra has several advantages: (1) no matrix inversion is required, so no meaningless values are generated; (2) not only can a closed-system condition be introduced, but the end-members remain independent (i.e., the last one is not the complement to one of the sum of the others, as in the least-squares method); (3) a condition of positive weights can be imposed. The latter condition adds a supplementary equation to the system, so one more end-member may be taken into account, improving both the qualitative and the quantitative aspects of the mixture problem. Examples based on Landsat TM imagery are shown in the fields of vegetation monitoring (subtraction of the vegetation component in the landscape) and spectral geology in arid terrains (end-members being defined through a principal components analysis of the image).
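
    A small sketch of the positivity-constrained variant discussed above, assuming SciPy: nonnegative least squares with the closure (sum-to-one) condition imposed through a heavily weighted augmented row. The end-member matrix and pixel spectrum are made-up numbers.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix(E, p, closure_weight=100.0):
        """E: bands x end-members matrix; p: pixel spectrum -> abundances."""
        n = E.shape[1]
        A = np.vstack([E, closure_weight * np.ones((1, n))])  # sum-to-one row
        b = np.concatenate([p, [closure_weight]])
        fractions, _ = nnls(A, b)          # positivity enforced by NNLS
        return fractions

    E = np.array([[0.1, 0.7], [0.4, 0.3], [0.8, 0.2]])  # 3 bands, 2 end-members
    p = 0.6 * E[:, 0] + 0.4 * E[:, 1]
    print(unmix(E, p))                     # approximately [0.6, 0.4]
    ```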

  3. Least Squares Shadowing Sensitivity Analysis of Chaotic and Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Gomez, Steven

    2013-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as those obtained using high-fidelity turbulence simulations. This breakdown is due to the ``butterfly effect'': the high sensitivity of chaotic dynamical systems to initial conditions. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic and turbulent fluid flows. LSS computes gradients using the ``shadow trajectory,'' a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. This talk will outline Least Squares Shadowing and demonstrate it on several chaotic and turbulent fluid flows, including homogeneous isotropic turbulence, Rayleigh-Bénard convection and turbulent channel flow. We would like to acknowledge AFOSR Award F11B-T06-0007 under Dr. Fariba Fahroo, NASA Award NNH11ZEA001N under Dr. Harold Atkins, as well as financial support from ConocoPhillips, the NDSEG fellowship and the ANSYS Fellowship.

  4. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    SciTech Connect

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
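
    A dense toy version of the idea, assuming SciPy: the R factor of a perturbed Â is applied as a right preconditioner inside LSQR through a LinearOperator. The perturbation used here is an arbitrary small random one rather than the structured row additions/deletions of the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_triangular
    from scipy.sparse.linalg import LinearOperator, lsqr

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = rng.standard_normal(200)
    A_hat = A + 1e-3 * rng.standard_normal(A.shape)  # small generic perturbation

    R = np.linalg.qr(A_hat, mode="r")                # R factor of A_hat

    # Right-preconditioned operator: solve min_y ||A R^{-1} y - b||_2.
    M = LinearOperator(
        A.shape,
        matvec=lambda y: A @ solve_triangular(R, y),
        rmatvec=lambda z: solve_triangular(R, A.T @ z, trans="T"),
    )
    y = lsqr(M, b)[0]
    x = solve_triangular(R, y)                       # recover x = R^{-1} y
    print(np.linalg.norm(A.T @ (A @ x - b)))         # normal-equations residual
    ```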

  5. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

    NASA Astrophysics Data System (ADS)

    Gao, Ling; Ren, Shouxin

    2005-10-01

    Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethyl ammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer solution. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples; thus the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of the noise removal can be improved. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.

  6. On the decoding of intracranial data using sparse orthonormalized partial least squares

    NASA Astrophysics Data System (ADS)

    van Gerven, Marcel A. J.; Chao, Zenas C.; Heskes, Tom

    2012-04-01

    It has recently been shown that robust decoding of motor output from electrocorticogram signals in monkeys over prolonged periods of time has become feasible (Chao et al 2010 Front. Neuroeng. 3 1-10 ). In order to achieve these results, multivariate partial least-squares (PLS) regression was used. PLS uses a set of latent variables, referred to as components, to model the relationship between the input and the output data and is known to handle high-dimensional and possibly strongly correlated inputs and outputs well. We developed a new decoding method called sparse orthonormalized partial least squares (SOPLS) which was tested on a subset of the data used in Chao et al (2010) (freely obtainable from neurotycho.org (Nagasaka et al 2011 PLoS ONE 6 e22561)). We show that SOPLS reaches the same decoding performance as PLS using just two sparse components which can each be interpreted as encoding particular combinations of motor parameters. Furthermore, the sparse solution afforded by the SOPLS model allowed us to show the functional involvement of beta and gamma band responses in premotor and motor cortex for predicting the first component. Based on the literature, we conjecture that this first component is involved in the encoding of movement direction. Hence, the sparse and compact representation afforded by the SOPLS model facilitates interpretation of which spectral, spatial and temporal components are involved in successful decoding. These advantages make the proposed decoding method an important new tool in neuroprosthetics.

  7. Fracture characterization by hybrid enumerative search and Gauss-Newton least-squares inversion methods

    NASA Astrophysics Data System (ADS)

    Alkharji, Mohammed N.

    Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-matrix and linear slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local-minimum solutions, but prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups: the first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first-group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties.
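
    The hybrid strategy lends itself to a compact generic sketch: enumerate a coarse grid over the group-1 parameters, run a local Gauss-Newton-type least-squares solve for group 2 at each grid point, and keep the lowest-residual combination. The forward model f and all names below are placeholders, not the paper's elastic-tensor model.

    ```python
    import itertools
    import numpy as np
    from scipy.optimize import least_squares

    def hybrid_fit(f, observed, grid_axes, p2_init):
        """grid_axes: one 1-D sample array per group-1 parameter."""
        best = (np.inf, None, None)
        for p1 in itertools.product(*grid_axes):
            p1 = np.array(p1)                 # fixed group-1 candidate
            sol = least_squares(lambda p2: f(p1, p2) - observed, p2_init)
            if 2 * sol.cost < best[0]:        # sol.cost = 0.5 * ||residual||^2
                best = (2 * sol.cost, p1, sol.x)
        return best  # (squared residual norm, group-1 params, group-2 params)
    ```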

  8. FOSLS (first-order systems least squares): An overview

    SciTech Connect

    Manteuffel, T.A.

    1996-12-31

    The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.

  9. Recursive least-squares algorithms for fast discrete frequency domain equalization

    NASA Astrophysics Data System (ADS)

    Picchi, G.; Prati, G.

    A simple least-squares initialization algorithm (IA) is defined for use with a self-orthogonalizing equalization algorithm in the discrete frequency domain (DFD). A parallel recursive relation is formulated for updating the Kalman vector in the Kalman/Godard algorithm. The IA is shown to be a modified LS algorithm, thus permitting an exact solution of the LS problem during the equalizer fill-up stage when the data correlation matrix is singular. The solution to the LS problem provides a basis for initialization of the DFD equalizer coefficients. The results of a simulation of on-line initialization of a DFD equalizer with the recursive initialization algorithm demonstrate a weighting capability that minimizes the effects of mean-square errors of poorly estimated small-value taps.
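
    For reference, a textbook exponentially weighted recursive least-squares update of the kind that underlies Kalman/Godard-style equalizer adaptation is sketched below; it is a generic time-domain RLS, not the paper's DFD-specific initialization algorithm, and the parameter values are illustrative.

    ```python
    import numpy as np

    def rls_equalizer(inputs, desired, n_taps, lam=0.99, delta=100.0):
        w = np.zeros(n_taps)             # equalizer taps
        P = delta * np.eye(n_taps)       # inverse correlation matrix estimate
        for n in range(n_taps - 1, len(inputs)):
            u = inputs[n - n_taps + 1:n + 1][::-1]  # regressor, newest first
            k = P @ u / (lam + u @ P @ u)           # gain (Kalman) vector
            e = desired[n] - w @ u                  # a priori error
            w = w + k * e
            P = (P - np.outer(k, u @ P)) / lam      # exponential forgetting
        return w
    ```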

  10. First-order system least squares for the pure traction problem in planar linear elasticity

    SciTech Connect

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L^2 norms to define the FOSLS functional, is shown under certain H^2 regularity assumptions to admit optimal H^1-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H^{-1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L^2 norm and for displacement in an H^1 norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  11. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems

    PubMed Central

    Choi, Sou-Cheng T.; Saunders, Michael A.

    2014-01-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
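
    As a tiny dense illustration of the minimum-length solution that MINRES-QLP targets: for a singular but consistent symmetric system, the pseudoinverse picks the solution of smallest norm. MINRES-QLP obtains the same solution iteratively for large sparse or operator-defined problems; the matrix below is a toy example.

    ```python
    import numpy as np

    A = np.array([[2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])   # symmetric and singular
    b = np.array([2.0, 1.0, 0.0])     # consistent right-hand side
    x = np.linalg.pinv(A) @ b         # minimum-norm solution: (1, 1, 0)
    print(x)
    ```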

  12. Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.

    SciTech Connect

    Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.

    1999-08-17

    The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.

  13. The program LOPT for least-squares optimization of energy levels

    NASA Astrophysics Data System (ADS)

    Kramida, A. E.

    2011-02-01

    The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem.
    Program summary. Program title: LOPT. Catalogue identifier: AEHM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 19 254. No. of bytes in distributed program, including test data, etc.: 427 839. Distribution format: tar.gz. Programming language: Perl v.5. Computer: PC, Mac, Unix workstations. Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX). RAM: 3 Mwords or more. Word size: 32 or 64. Classification: 2.2.
    Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals.
    Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution.
    Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N^3 and depends on the computer.
    Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision.
    Running time: 1 s for N = 100, or 60 s for N = 400 on a typical PC.
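
    The core linear-algebra step can be sketched in a few lines: each measured transition constrains a difference of two level energies, giving a sparse, weighted linear system with the ground level pinned at zero. The helper below and its toy data are illustrative, not LOPT's actual variable-substitution construction.

    ```python
    import numpy as np

    def optimize_levels(n_levels, transitions):
        """transitions: (upper, lower, wavenumber, uncertainty) tuples."""
        rows, rhs, wts = [], [], []
        for up, lo, sigma, unc in transitions:
            row = np.zeros(n_levels)
            row[up], row[lo] = 1.0, -1.0      # E_up - E_lo = sigma
            rows.append(row)
            rhs.append(sigma)
            wts.append(1.0 / unc)
        wts = np.array(wts)
        A = np.array(rows) * wts[:, None]     # apply weights
        b = np.array(rhs) * wts
        E, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)  # pin ground level
        return np.concatenate([[0.0], E])

    print(optimize_levels(3, [(1, 0, 100.00, 0.01),
                              (2, 1, 150.00, 0.02),
                              (2, 0, 250.03, 0.05)]))  # ~[0, 100, 250]
    ```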

  14. Scaled first-order methods for a class of large-scale constrained least square problems

    NASA Astrophysics Data System (ADS)

    Coli, Vanna Lisa; Ruggiero, Valeria; Zanni, Luca

    2016-10-01

    Typical applications in signal and image processing often require the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m « n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and AT can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, the gradient projection methods can be improved by suitable step-length rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored for the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and real data arising from a problem of fibre orientation estimation in diffusion MRI.
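
    A matrix-free sketch of one such scheme, assuming only matrix-vector product access as described above: gradient projection for nonnegativity-constrained least squares, with a Barzilai-Borwein step length standing in for the paper's step-length rules.

    ```python
    import numpy as np

    def gp_nnls(matvec, rmatvec, b, x0, iters=200):
        """Gradient projection for min 0.5*||Ax - b||^2 subject to x >= 0."""
        x = np.maximum(x0, 0.0)
        g = rmatvec(matvec(x) - b)                  # gradient
        step = 1.0
        for _ in range(iters):
            x_new = np.maximum(x - step * g, 0.0)   # project onto x >= 0
            g_new = rmatvec(matvec(x_new) - b)
            s, yv = x_new - x, g_new - g
            if s @ yv > 0:
                step = (s @ s) / (s @ yv)           # BB1 step length
            x, g = x_new, g_new
        return x
    ```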

  15. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    NASA Technical Reports Server (NTRS)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement for the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  16. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  17. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20 deg.

  18. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
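
    The iterative procedure described above can be sketched as a fixed-point loop: solve a weighted least-squares problem, recompute the weights from the new estimate, and repeat until the estimate settles. The weight model weight_fn is whatever the application dictates; everything here is illustrative.

    ```python
    import numpy as np

    def variable_weight_lsq(A, b, weight_fn, x0, tol=1e-10, max_iter=100):
        """weight_fn(x) returns per-observation weights that depend on x."""
        x = x0
        for _ in range(max_iter):
            w = weight_fn(x)                             # update weights
            Aw, bw = np.sqrt(w)[:, None] * A, np.sqrt(w) * b
            x_new, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
            if np.linalg.norm(x_new - x) < tol:          # limit reached
                return x_new
            x = x_new
        return x
    ```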

  19. Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practically interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.

  20. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

  1. On the Significance of Properly Weighting Sorption Data for Least Squares Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
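
    A minimal sketch of what proper weighting looks like in practice, assuming SciPy: passing per-point uncertainties through curve_fit's sigma argument turns the Langmuir fit into a weighted least-squares regression. The data and error model below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, qmax, K):
        return qmax * K * C / (1.0 + K * C)   # sorbed amount vs. concentration

    C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium concentration
    q = np.array([0.9, 1.6, 2.4, 3.4, 3.9, 4.3])     # measured sorption
    sigma = 0.05 * q + 0.02                          # assumed error model

    popt, pcov = curve_fit(langmuir, C, q, p0=[4.0, 0.5],
                           sigma=sigma, absolute_sigma=True)
    print(popt)   # fitted (qmax, K)
    ```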

  2. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  3. First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients

    NASA Technical Reports Server (NTRS)

    Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard

    1996-01-01

    The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold uniformly with respect to the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.

  4. Least-Squares Multi-Angle Doppler Estimators for Plane Wave Vector Flow Imaging.

    PubMed

    Yiu, Billy Y S; Yu, Alfred C H

    2016-06-20

    Designing robust Doppler vector estimation strategies for use in plane wave imaging schemes based on unfocused transmissions is a topic that has yet to be studied in depth. One potential solution is to use a multi-angle Doppler estimation approach that computes flow vectors via least-squares fitting, but its performance has not been established. Here, we investigated the efficacy of multi-angle Doppler vector estimators by: (i) comparing its performance with respect to the classical dual-angle (cross-beam) Doppler vector estimator; (ii) examining the working effects of multi-angle Doppler vector estimators on flow visualization quality in the context of dynamic flow path rendering. Implementing Doppler vector estimators that use different combinations of transmit (Tx) and receive (Rx) steering angles, our analysis has compared the classical dual-angle Doppler method, a 5-Tx version of dual-angle Doppler, and various multi-angle Doppler configurations based on 3 Tx and 5 Tx. Two angle spans (10°, 20°) were examined in forming the steering angles. In imaging scenarios with known flow profiles (rotating disc and straight-tube parabolic flow), the 3-Tx, 3-Rx and 5-Tx, 5-Rx multi-angle configurations produced vector estimates with smaller variability compared with the dual-angle method, and the estimation results were more consistent with the use of a 20° angle span. Flow vectors derived from multi-angle Doppler estimators were also found to be effective in rendering the expected flow paths in both rotating disc and straight-tube imaging scenarios, while the ones derived from the dual-angle estimator yielded flow paths that deviated from the expected course. These results serve to attest that, using multi-angle least-squares Doppler vector estimators, flow visualization can be consistently achieved.

  5. Least-Squares Multi-Angle Doppler Estimators for Plane-Wave Vector Flow Imaging.

    PubMed

    Yiu, Billy Y S; Yu, Alfred C H

    2016-11-01

    Designing robust Doppler vector estimation strategies for use in plane-wave imaging schemes based on unfocused transmissions is a topic that has yet to be studied in depth. One potential solution is to use a multi-angle Doppler estimation approach that computes flow vectors via least-squares fitting, but its performance has not been established. Here, we investigated the efficacy of multi-angle Doppler vector estimators by: 1) comparing its performance with respect to the classical dual-angle (cross-beam) Doppler vector estimator and 2) examining the working effects of multi-angle Doppler vector estimators on flow visualization quality in the context of dynamic flow path rendering. Implementing Doppler vector estimators that use different combinations of transmit (Tx) and receive (Rx) steering angles, our analysis has compared the classical dual-angle Doppler method, a 5-Tx version of dual-angle Doppler, and various multi-angle Doppler configurations based on 3 Tx and 5 Tx. Two angle spans (10°, 20°) were examined in forming the steering angles. In imaging scenarios with known flow profiles (rotating disk and straight-tube parabolic flow), the 3-Tx, 3-Rx and 5-Tx, 5-Rx multi-angle configurations produced vector estimates with smaller variability compared with the dual-angle method, and the estimation results were more consistent with the use of a 20° angle span. Flow vectors derived from multi-angle Doppler estimators were also found to be effective in rendering the expected flow paths in both rotating disk and straight-tube imaging scenarios, while the ones derived from the dual-angle estimator yielded flow paths that deviated from the expected course. These results serve to attest that using multi-angle least-squares Doppler vector estimators, flow visualization can be consistently achieved.
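
    The least-squares step at the heart of both records above reduces to stacking the per-angle Doppler projections into an overdetermined linear system for the velocity vector, as in this illustrative 2-D sketch (angles and velocities are made up, and real estimators must also account for the combined Tx/Rx beam geometry).

    ```python
    import numpy as np

    def lsq_vector_doppler(angles_deg, v_meas):
        """Probe angles (deg from axial) and projected velocities -> (vx, vz)."""
        th = np.deg2rad(np.asarray(angles_deg))
        D = np.column_stack([np.sin(th), np.cos(th)])   # projection directions
        v, *_ = np.linalg.lstsq(D, np.asarray(v_meas), rcond=None)
        return v

    true_v = np.array([0.2, 0.5])              # lateral and axial velocity (m/s)
    angles = [-20, -10, 0, 10, 20]             # a 5-angle configuration
    meas = [np.sin(np.deg2rad(a)) * true_v[0] +
            np.cos(np.deg2rad(a)) * true_v[1] for a in angles]
    print(lsq_vector_doppler(angles, meas))    # recovers (0.2, 0.5)
    ```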

  6. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial-volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  7. New prediction-augmented classical least squares (PACLS) methods: Application to unmodeled interferents

    SciTech Connect

    HAALAND,DAVID M.; MELGAARD,DAVID K.

    2000-01-26

    A significant improvement to the classical least squares (CLS) multivariate analysis method has been developed. The new method, called prediction-augmented classical least squares (PACLS), removes the restriction for CLS that all interfering spectral species must be known and their concentrations included during the calibration. The authors demonstrate that PACLS can correct inadequate CLS models if spectral components left out of the calibration can be identified and if their spectral shapes can be derived and added during a PACLS prediction step. The new PACLS method is demonstrated for a system of dilute aqueous solutions containing urea, creatinine, and NaCl analytes with and without temperature variations. The authors demonstrate that if CLS calibrations are performed using only a single analyte's concentration, then there is little, if any, prediction ability. However, if pure-component spectra of analytes left out of the calibration are independently obtained and added during PACLS prediction, then the CLS prediction ability is corrected and predictions become comparable to that of a CLS calibration that contains all analyte concentrations. It is also demonstrated that constant-temperature CLS models can be used to predict variable-temperature data by employing the PACLS method augmented by the spectral shape of a temperature change of the water solvent. In this case, PACLS can also be used to predict sample temperature with a standard error of prediction of 0.07 °C even though the calibration data did not contain temperature variations. The PACLS method is also shown to be capable of modeling system drift to maintain a calibration in the presence of spectrometer drift.

  8. Discontinuous Galerkin method for piecewise regular solutions to the nonlinear age-structured population model.

    PubMed

    Krzyzanowski, Piotr; Wrzosek, Dariusz; Wit, Dominik

    2006-10-01

    A discontinuous Galerkin approximation of the nonlinear Lotka-McKendrick equation is considered in the frequent case when the solution is only piecewise regular. An O(h^{r+1/2}) error estimate for rth-order polynomial finite elements is proved, as well as piecewise H^1-regularity of the exact solution, which guarantees the error estimate for r = 0. Certain implementational details which improve the robustness of the method are also addressed.

  9. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much-coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal and anomalous behavior.
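
    For context, training an LS-SVM amounts to a single symmetric linear solve rather than a quadratic program; below is a minimal function-estimation sketch in the Suykens-style formulation with an RBF kernel, where gamma, width, and all data handling are illustrative rather than the thesis's Bayesian framework.

    ```python
    import numpy as np

    def rbf(A, B, width=1.0):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * width ** 2))

    def lssvm_train(X, y, gamma=10.0, width=1.0):
        """y: real targets (or +/-1 labels); returns bias b and multipliers."""
        n = X.shape[0]
        # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
        M = np.zeros((n + 1, n + 1))
        M[0, 1:] = M[1:, 0] = 1.0
        M[1:, 1:] = rbf(X, X, width) + np.eye(n) / gamma
        sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]

    def lssvm_predict(X_train, alpha, b, X_new, width=1.0):
        return rbf(X_new, X_train, width) @ alpha + b
    ```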

  10. TDRSS-user orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), and operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.

  11. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.

  12. The crux of the method: assumptions in ordinary least squares and logistic regression.

    PubMed

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.

  13. A class of least-squares filtering and identification algorithms with systolic array architectures

    NASA Technical Reports Server (NTRS)

    Kalson, Seth Z.; Yao, Kung

    1991-01-01

    A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput-rate and VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. This method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Exponentially weighted least-squares criteria are considered for both sliding and growing memory.

  14. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.

  15. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least-squares identifications of each said group are run, treating the other unknown parameters appearing in their regression equations as if they were known perfectly, with those values provided by recursive least-squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.

  17. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    SciTech Connect

    Manteuffel, T. A.; Ressel, K. J.; Starke, G.

    1996-12-31

    The least-squares finite element framework for the neutron transport equation introduced in earlier work is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from P_1 and P_2 approximations of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term using trilinear finite elements on a uniform tessellation into cubes.

  18. Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.

    ERIC Educational Resources Information Center

    Pham, Tuan Dinh; Mocks, Joachim

    1992-01-01

    Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)

  19. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

    PubMed

    Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

    2016-04-10

    Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method makes the zones with small aberrations insensitive and hinders those zones from being further corrected. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparisons of results show that peak intensity in the far field improved from 1242 analog digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates the weighted least-squares method has better performance than the least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system.
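
    The weighting idea reduces to solving weighted normal equations in which zones with large uncorrectable residuals receive relatively small weights; a minimal dense sketch with invented dimensions follows, where A is a stand-in geometry/influence matrix mapping commands to measured slopes.

    ```python
    import numpy as np

    def weighted_reconstruct(A, s, weights):
        """Solve min ||W^(1/2)(A x - s)||_2 via weighted normal equations."""
        W = np.diag(weights)
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ s)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))   # geometry/influence matrix (toy)
    s = rng.standard_normal(40)         # measured slopes (toy)
    w = np.ones(40)
    w[:8] = 0.1                         # small weights for the bad zone
    x = weighted_reconstruct(A, s, w)
    ```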

  20. A Discontinuous Petrov-Galerkin Methodology for Adaptive Solutions to the Incompressible Navier-Stokes Equations

    SciTech Connect

    Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert

    2015-11-15

    The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates—the inf-sup constants governing the convergence are mesh-independent, and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations, and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.

  1. On the accuracy of least squares methods in the presence of corner singularities

    NASA Technical Reports Server (NTRS)

    Cox, C. L.; Fix, G. J.

    1985-01-01

    Elliptic problems with corner singularities are discussed. Finite element approximations based on variational principles of the least squares type tend to display poor convergence properties in such contexts. Moreover, mesh refinement or the use of special singular elements do not appreciably improve matters. It is shown that if the least squares formulation is done in appropriately weighted spaces, then optimal convergence results in unweighted spaces like L^2.

  2. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    NASA Astrophysics Data System (ADS)

    Chater, Mario; Ni, Angxiu; Wang, Qiqi

    2017-01-01

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed derivatives for several chaotic ODEs and PDEs. The development in this paper aims to simplify the LSS method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  3. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    SciTech Connect

    Jiang, Lijian; Li, Xinping

    2015-08-01

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least-square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least-square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least-square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions.

  4. Avoiding Communication in the Lanczos Bidiagonalization Routine and Associated Least Squares QR Solver

    DTIC Science & Technology

    2015-04-12

    Carson, Erin (Electrical Engineering and Computer Sciences). ...throughout scientific codes, are often the bottlenecks in application performance due to a low computation/communication ratio. In this paper we develop...

  5. Speckle evolution with multiple steps of least-squares phase removal

    SciTech Connect

    Chen Mingzhou; Dainty, Chris; Roux, Filippus S.

    2011-08-15

    We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.

  6. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that the input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application to big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to the input data density. With this quantization scheme, a large data set is quantized to a small subset, where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
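
    A minimal sketch of the two building blocks named above, under simplifying assumptions: the density-dependent quantization is replaced by plain uniform subsampling, the Nyström map is computed from the resulting landmark set, and the LS-SVM primal problem is reduced to a ridge-style linear solve without a bias term. All function names and parameter values are illustrative, not the authors' code.

        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            # ||a - b||^2 expanded as ||a||^2 + ||b||^2 - 2 a.b, then exponentiated.
            d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
            return np.exp(-gamma * d2)

        def nystrom_features(X, landmarks, gamma=1.0):
            # Approximate kernel feature map Phi = K_nm @ K_mm^{-1/2}.
            K_mm = rbf_kernel(landmarks, landmarks, gamma)
            K_nm = rbf_kernel(X, landmarks, gamma)
            U, s, _ = np.linalg.svd(K_mm)
            return K_nm @ (U / np.sqrt(np.maximum(s, 1e-12))) @ U.T

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5))
        y = np.sign(X[:, 0] + 0.3 * rng.normal(size=2000))

        # Stand-in for the density-dependent quantized subset: plain subsampling.
        landmarks = X[rng.choice(len(X), 50, replace=False)]
        Phi = nystrom_features(X, landmarks)

        # LS-SVM-style primal solve: (Phi'Phi + I/C) w = Phi'y.
        C = 10.0
        w = np.linalg.solve(Phi.T @ Phi + np.eye(Phi.shape[1]) / C, Phi.T @ y)
        print("train accuracy:", (np.sign(Phi @ w) == y).mean())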

  7. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

    SciTech Connect

    Paulsen, Jeffrey L; Franck, John; Demas, Vasiliki; Bouchard, Louis-S.

    2008-03-27

    Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16,100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32,200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable "sensitive volumes."

  8. A least squares fusion rule in multiple sensors distributed detection systems

    NASA Astrophysics Data System (ADS)

    Aziz, A. M.

    In this paper, a new least squares data fusion rule for multiple-sensor distributed detection systems is proposed. In the proposed approach, the central processor combines the sensors' hard decisions through a least squares criterion to make the global hard decision. In contrast to optimum Neyman-Pearson fusion, where the distributed detection system is optimized at the fusion center level or at the sensor level, but not simultaneously, the proposed approach achieves global optimization at both the fusion center and the distributed sensor levels. This is done without knowing the error probabilities of the individual distributed sensors, so the proposed least squares fusion rule does not rely on the stability of the noise environment or of the sensors' false alarm and detection probabilities. The proposed least squares fusion rule is therefore robust and achieves better global performance. Furthermore, the proposed method can easily be applied to any number of sensors and any type of distributed observations. The performance of the proposed least squares fusion rule is evaluated and compared to the optimum Neyman-Pearson fusion rule. The results show that the proposed least squares fusion rule outperforms the Neyman-Pearson fusion rule.

  9. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    PubMed

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-01-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models subjected to a comparative study through handling UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.

  10. Compressible seal flow analysis using the finite element method with Galerkin solution technique

    NASA Technical Reports Server (NTRS)

    Zuk, J.

    1974-01-01

    High pressure gas sealing involves not only balancing the viscous force with the pressure gradient force but also accounting for fluid inertia--especially for choked flow. The conventional finite element method which uses a Rayleigh-Ritz solution technique is not convenient for nonlinear problems. For these problems, a finite element method with a Galerkin solution technique (FEMGST) was formulated. One example, a three-dimensional axisymmetric flow formulation has nonlinearities due to compressibility, area expansion, and convective inertia. Solutions agree with classical results in the limiting cases. The development of the choked flow velocity profile is shown.

  11. On non-combinatorial weighted total least squares with inequality constraints

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2014-08-01

    Observation systems known as errors-in-variables (EIV) models with model parameters estimated by total least squares (TLS) have been discussed for more than a century, though the terms EIV and TLS were coined much more recently. So far, it has only been shown that the inequality-constrained TLS (ICTLS) solution can be obtained by the combinatorial methods, assuming that the weight matrices of observations involved in the data vector and the data matrix are identity matrices. Although the previous works test all combinations of active sets or solution schemes in a clear way, some aspects have received little or no attention such as admissible weights, solution characteristics and numerical efficiency. Therefore, the aim of this study was to adjust the EIV model, subject to linear inequality constraints. In particular, (1) This work deals with a symmetrical positive-definite cofactor matrix that could otherwise be quite arbitrary. It also considers cross-correlations between cofactor matrices for the random coefficient matrix and the random observation vector. (2) From a theoretical perspective, we present first-order Karush-Kuhn-Tucker (KKT) necessary conditions and the second-order sufficient conditions of the inequality-constrained weighted TLS (ICWTLS) solution by analytical formulation. (3) From a numerical perspective, an active set method without combinatorial tests as well as a method based on sequential quadratic programming (SQP) is established. By way of applications, computational costs of the proposed algorithms are shown to be significantly lower than the currently existing ICTLS methods. It is also shown that the proposed methods can treat the ICWTLS problem in the case of more general weight matrices. Finally, we study the ICWTLS solution in terms of non-convex weighted TLS contours from a geometrical perspective.

  12. A two-dimensional Riemann solver with self-similar sub-structure - Alternative formulation based on least squares projection

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Vides, Jeaniffer; Gurski, Katharine; Nkonga, Boniface; Dumbser, Michael; Garain, Sudip; Audit, Edouard

    2016-01-01

    Just as the quality of a one-dimensional approximate Riemann solver is improved by the inclusion of internal sub-structure, the quality of a multidimensional Riemann solver is also similarly improved. Such multidimensional Riemann problems arise when multiple states come together at the vertex of a mesh. The interaction of the resulting one-dimensional Riemann problems gives rise to a strongly-interacting state. We wish to endow this strongly-interacting state with physically-motivated sub-structure. The self-similar formulation of Balsara [16] proves especially useful for this purpose. While that work is based on a Galerkin projection, in this paper we present an analogous self-similar formulation that is based on a different interpretation. In the present formulation, we interpret the shock jumps at the boundary of the strongly-interacting state quite literally. The enforcement of the shock jump conditions is done with a least squares projection (Vides, Nkonga and Audit [67]). With that interpretation, we again show that the multidimensional Riemann solver can be endowed with sub-structure. However, we find that the most efficient implementation arises when we use a flux vector splitting and a least squares projection. An alternative formulation that is based on the full characteristic matrices is also presented. The multidimensional Riemann solvers that are demonstrated here use one-dimensional HLLC Riemann solvers as building blocks. Several stringent test problems drawn from hydrodynamics and MHD are presented to show that the method works. Results from structured and unstructured meshes demonstrate the versatility of our method. The reader is also invited to watch a video introduction to multidimensional Riemann solvers at http://www.nd.edu/~dbalsara/Numerical-PDE-Course.

  13. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Alternating least squares with penalty functions.

    PubMed

    Gemperline, Paul J; Cash, Eric

    2003-08-15

    A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
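
    To make the soft-constraint idea concrete, here is a minimal sketch of an alternating least squares factorization D ≈ C S^T in which nonnegativity is encouraged by a quadratic penalty on negative entries rather than enforced as a hard constraint; the penalty form, the function names, and all parameter values are illustrative assumptions, not the published algorithm.

        import numpy as np

        def penalized_ls(A, B, lam):
            # Solve min_X ||B - A X||_F^2 + lam * ||min(X, 0)||_F^2 column by
            # column, refreshing the set of penalized (negative) entries a few times.
            X = np.linalg.lstsq(A, B, rcond=None)[0]
            for _ in range(5):
                for j in range(X.shape[1]):
                    act = (X[:, j] < 0).astype(float)   # penalize negative entries only
                    M = A.T @ A + lam * np.diag(act)
                    X[:, j] = np.linalg.solve(M, A.T @ B[:, j])
            return X

        def smcr_als(D, k, lam=10.0, iters=50, seed=0):
            # Alternate soft-constrained LS updates of concentrations C and spectra S.
            S = np.abs(np.random.default_rng(seed).normal(size=(D.shape[1], k)))
            for _ in range(iters):
                C = penalized_ls(S, D.T, lam).T   # D.T ~ S @ C.T
                S = penalized_ls(C, D, lam).T     # D   ~ C @ S.T
            return C, S

    With a large penalty weight the factors approach hard nonnegativity; with a small one, minor negative excursions caused by noise are tolerated, which is the point of the soft formulation.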

  14. A Comparative Study of Different Reconstruction Schemes for a Reconstructed Discontinuous Galerkin Method on Arbitrary Grids

    SciTech Connect

    Hong Luo; Hanping Xiao; Robert Nourgaliev; Chunpei Cai

    2011-06-01

    A comparative study of different reconstruction schemes for a reconstruction-based discontinuous Galerkin method, termed the RDG(P1P2) method, is performed for compressible flow problems on arbitrary grids. The RDG method is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution via a reconstruction scheme commonly used in the finite volume method. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are implemented to obtain a quadratic polynomial representation of the underlying discontinuous Galerkin linear polynomial solution on each cell. These three reconstruction/recovery methods are compared for a variety of compressible flow problems on arbitrary meshes to assess their accuracy and robustness. The numerical results demonstrate that all three reconstruction methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstruction method provides the best performance in terms of both accuracy and robustness.
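
    As a flavor of the least-squares reconstruction idea compared above, the sketch below fits a local gradient from cell-centroid differences in the unweighted finite-volume style; the stencil geometry and function names are illustrative assumptions, and a quadratic RDG-style reconstruction would extend the same least-squares system with second-order terms.

        import numpy as np

        def ls_gradient(xc, uc, xn, un):
            # Least-squares gradient at a cell: each neighbor j contributes one
            # row (x_j - x_c) and one datum (u_j - u_c) to an overdetermined system.
            dX = xn - xc
            du = un - uc
            g, *_ = np.linalg.lstsq(dX, du, rcond=None)
            return g

        # A linear field u = 2x + 3y is reproduced exactly by the reconstruction.
        u = lambda p: 2 * p[..., 0] + 3 * p[..., 1]
        xc = np.array([0.0, 0.0])
        xn = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.2], [0.3, -1.0]])
        print(ls_gradient(xc, u(xc), xn, u(xn)))   # -> approximately [2. 3.]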

  15. A Chebyshev condition for accelerating convergence of iterative tomographic methods-solving large least squares problems

    NASA Astrophysics Data System (ADS)

    Olson, Allen H.

    1987-08-01

    The Simultaneous Iterative Reconstruction Technique (SIRT) is a variation of Richardson's method for solving linear systems with positive definite matrices, and can be used for solving any least squares problem. Previous SIRT methods used in tomography have employed a constant normalization factor for the step size. With this normalization, the convergence rate of the eigencomponents decreases as the eigenvalue decreases, making these methods impractical for obtaining large bandwidth solutions. By allowing the normalization factor to change with each iteration, the error after k iterations is shown to be a kth-order polynomial. The factors are then chosen to yield a Chebyshev polynomial so that the maximum error in the iterative method is minimized over a prescribed range of eigenvalues. Compared with k iterations using a constant normalization, the Chebyshev method requires only on the order of √k iterations and has the property that all eigencomponents converge at the same rate. Simple expressions are given which permit the number of iterations to be determined in advance based upon the desired accuracy and bandwidth. A stable ordering of the Chebyshev factors is also given which minimizes the effects of numerical roundoff. Since a good upper bound for the maximum eigenvalue of the normal matrix is essential to the calculations, the well known 'power method with shift of origin' is combined with the Chebyshev method to estimate its value.
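
    A minimal sketch of the idea, assuming the eigenvalue bounds of the normal matrix are known: a Richardson iteration on the normal equations whose step sizes are the reciprocals of the roots of the degree-k Chebyshev polynomial mapped onto [lmin, lmax]. The naive root ordering below ignores the stable ordering discussed above, and all names are illustrative.

        import numpy as np

        def chebyshev_sirt(A, b, lmin, lmax, k):
            # Non-stationary Richardson iteration x <- x + alpha_j * A'(b - A x),
            # with alpha_j = 1 / (Chebyshev root mapped onto [lmin, lmax]).
            x = np.zeros(A.shape[1])
            mid, half = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
            for j in range(k):
                alpha = 1.0 / (mid + half * np.cos(np.pi * (2 * j + 1) / (2 * k)))
                x += alpha * (A.T @ (b - A @ x))
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(100, 20))
        b = rng.normal(size=100)
        lam = np.linalg.eigvalsh(A.T @ A)        # exact bounds, for the demo only
        x = chebyshev_sirt(A, b, lam.min(), lam.max(), k=40)
        print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))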

  16. First-order system least-squares for the Helmholtz equation

    SciTech Connect

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independent of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components ce^{ik(αx+βy)}, where c is a complex vector and where α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate ce^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of the coarser levels so that several coarse grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.

  17. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
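
    The additivity claim can be illustrated with a small sketch: the sufficient statistics for superposition are taken here to be the counts, the coordinate sums, and the raw cross-correlation matrix, which merge in constant time; the rotation is then obtained by a standard Kabsch-style SVD. Function names are illustrative, and this is a sketch of the general technique rather than the authors' library.

        import numpy as np

        def stats(X, Y):
            # Sufficient statistics of a paired point set: count, coordinate
            # sums, and the raw cross-correlation sum of x_i y_i^T.
            return len(X), X.sum(0), Y.sum(0), X.T @ Y

        def merge(s1, s2):
            # Additivity: statistics of a union are the sums of the parts (O(1)).
            return tuple(a + b for a, b in zip(s1, s2))

        def superpose(s):
            n, sx, sy, Sxy = s
            H = Sxy - np.outer(sx, sy) / n           # centered cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation taking X onto Y
            t = sy / n - R @ (sx / n)                # translation between centroids
            return R, t

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 3))
        c, s_ = np.cos(0.7), np.sin(0.7)
        R_true = np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])
        Y = X @ R_true.T + np.array([1.0, 2.0, 3.0])
        s12 = merge(stats(X[:50], Y[:50]), stats(X[50:], Y[50:]))
        R, t = superpose(s12)
        print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))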

  18. Reconstruction of vibroacoustic responses of a highly nonspherical structure using Helmholtz equation least-squares method.

    PubMed

    Lu, Huancai; Wu, Sean F

    2009-03-01

    The vibroacoustic responses of a highly nonspherical vibrating object are reconstructed using Helmholtz equation least-squares (HELS) method. The objectives of this study are to examine the accuracy of reconstruction and the impacts of various parameters involved in reconstruction using HELS. The test object is a simply supported and baffled thin plate. The reason for selecting this object is that it represents a class of structures that cannot be exactly described by the spherical Hankel functions and spherical harmonics, which are taken as the basis functions in the HELS formulation, yet the analytic solutions to vibroacoustic responses of a baffled plate are readily available so the accuracy of reconstruction can be checked accurately. The input field acoustic pressures for reconstruction are generated by the Rayleigh integral. The reconstructed normal surface velocities are validated against the benchmark values, and the out-of-plane vibration patterns at several natural frequencies are compared with the natural modes of a simply supported plate. The impacts of various parameters such as number of measurement points, measurement distance, location of the origin of the coordinate system, microphone spacing, and ratio of measurement aperture size to the area of source surface of reconstruction on the resultant accuracy of reconstruction are examined.

  19. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
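
    A minimal sketch of the single-pass idea described above: accumulate X'X and X'y observation by observation so that X itself is never stored, then solve the normal equations. The storage is O(p^2) floats, consistent with the memory bound quoted above; the function name and the toy data are illustrative.

        import numpy as np

        def streaming_ols(rows, p):
            # One pass over (x, y) observations, filling out X'X and X'y only.
            XtX = np.zeros((p, p))
            Xty = np.zeros(p)
            for x, y in rows:
                XtX += np.outer(x, x)
                Xty += x * y
            return np.linalg.solve(XtX, Xty)   # normal equations (full rank assumed)

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 4))
        y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=1000)
        beta = streaming_ols(zip(X, y), p=4)
        print(np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0]))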

  20. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of changes in the nozzle wall parameters is evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing high-Mach-number hypersonic nozzles which have been designed by classical procedures but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and for Mach 6 and Mach 15 contoured nozzles.

  1. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, how to develop least squares methods to estimate the ROC curve from clustered data has not been studied, and the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuity of the true underlying curve. The least squares methods are shown to be more efficient than existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods.

  2. Canonical correlation analysis for multilabel classification: a least-squares formulation, extensions, and analysis.

    PubMed

    Sun, Liang; Ji, Shuiwang; Ye, Jieping

    2011-01-01

    Canonical Correlation Analysis (CCA) is a well-known technique for finding the correlations between two sets of multidimensional variables. It projects both sets of variables onto a lower-dimensional space in which they are maximally correlated. CCA is commonly applied for supervised dimensionality reduction in which the two sets of variables are derived from the data and the class labels, respectively. It is well-known that CCA can be formulated as a least-squares problem in the binary class case. However, the extension to the more general setting remains unclear. In this paper, we show that under a mild condition which tends to hold for high-dimensional data, CCA in the multilabel case can be formulated as a least-squares problem. Based on this equivalence relationship, efficient algorithms for solving least-squares problems can be applied to scale CCA to very large data sets. In addition, we propose several CCA extensions, including the sparse CCA formulation based on the 1-norm regularization. We further extend the least-squares formulation to partial least squares. In addition, we show that the CCA projection for one set of variables is independent of the regularization on the other set of multidimensional variables, providing new insights on the effect of regularization on CCA. We have conducted experiments using benchmark data sets. Experiments on multilabel data sets confirm the established equivalence relationships. Results also demonstrate the effectiveness and efficiency of the proposed CCA extensions.

  3. Least-squares reverse-time migration of Cranfield VSP data for monitoring CO2 injection

    NASA Astrophysics Data System (ADS)

    TAN, S.; Huang, L.

    2012-12-01

    Cost-effective monitoring for carbon utilization and sequestration requires high-resolution imaging with a minimal amount of data. Least-squares reverse-time migration is a promising imaging method for this purpose. We apply least-squares reverse-time migration to a portion of the 3D vertical seismic profile data acquired at the Cranfield enhanced oil recovery field in Mississippi for monitoring CO2 injection. Conventional reverse-time migration of limited data suffers from significant image artifacts and poor image resolution. Least-squares reverse-time migration can reduce image artifacts and improve the image resolution. We demonstrate the significant improvements of least-squares reverse-time migration by comparing its migration images of the Cranfield VSP data with those obtained using conventional reverse-time migration.

  4. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling high-dimensional, large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved satisfactory.

  5. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    PubMed Central

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606

  6. Influence of the least-squares phase on optical vortices in strongly scintillated beams

    SciTech Connect

    Chen Mingzhou; Roux, Filippus S.

    2009-07-15

    The optical vortices that exist in strongly scintillated beams make it difficult for conventional adaptive optics systems to remove the phase distortions. When the least-squares reconstructed phase is removed, the vortices still remain. However, we found that the removal of the least-squares phase induces a portion of the vortices to be annihilated during subsequent propagation, causing a reduction in the total number of vortices. This can be understood in terms of the restoration of equilibrium between explicit vortices, which are visible in the phase function, and vortex bound states, which are somehow encoded in the continuous phase fluctuations. Numerical simulations are provided to show that the total number of optical vortices in a strongly scintillated beam can be reduced significantly after a few steps of least-squares phase corrections.

  7. Speckle noise removal applied to ultrasound image of carotid artery based on total least squares model.

    PubMed

    Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao

    2016-10-06

    An ultrasonic image speckle noise removal method based on a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. On the basis of the least squares principle, a total least squares model of the ultrasound image speckle noise removal process is established, orthogonal projection transformation is applied to the output of the model, and denoising of the ultrasound image speckle noise is realized. Experimental results show that the improved algorithm can greatly improve the resolution of the image and meet the needs of clinical diagnosis and treatment of the cardiovascular system of the head and neck. Furthermore, the success in imaging carotid arteries has strong implications for neurological complications such as stroke.

  8. A note on implementation of decaying product correlation structures for quasi-least squares.

    PubMed

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.

  9. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data

    NASA Astrophysics Data System (ADS)

    Wilgan, Karina; Hurter, Fabian; Geiger, Alain; Rohm, Witold; Bosy, Jarosław

    2017-02-01

    Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real time measurements. In this study, we present the total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland and the total refractivity profiles as well as ZTDs over Poland using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set with the best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing the horizontal gradients does not improve the vertical interpolation, and results in slightly larger biases and standard deviations. For Poland, the data set based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model and from a combination of the NWP model and GNSS ZTDs shows the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS observations and GNSS-only data set exhibit the best accuracy with an average bias (from all stations) of 3.7 mm and average standard deviations of 17.0 mm w.r.t. the reference GNSS stations.
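
    For readers unfamiliar with least-squares collocation, the sketch below shows the core prediction formula in one dimension with a Gaussian covariance model: the signal at new points is the observed vector filtered through the covariance matrices, y_hat(new) = C_new,obs (C_obs,obs + noise*I)^(-1) y_obs. The covariance model, parameter values, and station data are illustrative assumptions, not the COMEDIE implementation.

        import numpy as np

        def collocation_predict(x_obs, y_obs, x_new, corr_len=1.0, noise=1e-4):
            # Least-squares collocation with a Gaussian covariance model:
            # y_hat(new) = C_new,obs @ (C_obs,obs + noise * I)^{-1} @ y_obs
            def cov(a, b):
                return np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)
            K = cov(x_obs, x_obs) + noise * np.eye(len(x_obs))
            return cov(x_new, x_obs) @ np.linalg.solve(K, y_obs)

        # Interpolate hypothetical ZTD-like values (meters) between stations.
        x_obs = np.array([0.0, 1.0, 2.0, 3.0])
        y_obs = np.array([2.30, 2.34, 2.31, 2.28])
        print(collocation_predict(x_obs, y_obs, np.array([1.5])))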

  10. Least square neural network model of the crude oil blending process.

    PubMed

    Rubio, José de Jesús

    2016-06-01

    In this paper, the recursive least squares algorithm is designed for big data learning with a feedforward neural network. The proposed method, a combination of recursive least squares and a feedforward neural network, obtains four advantages over the standalone algorithms: it requires fewer regressors, it is fast, it has the ability to learn, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process.
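
    A minimal sketch of the recursive least squares building block named above, tracking the coefficients of a linear model from streaming samples; the forgetting factor, initialization, and toy plant are illustrative assumptions, and the combination with a neural network is not reproduced here.

        import numpy as np

        def rls_update(w, P, x, y, lam=0.99):
            # One RLS step: w is the weight estimate, P the inverse correlation
            # matrix, lam the exponential forgetting factor.
            Px = P @ x
            k = Px / (lam + x @ Px)            # gain vector
            w = w + k * (y - w @ x)            # correct by the a priori error
            P = (P - np.outer(k, Px)) / lam    # rank-one update of the inverse
            return w, P

        rng = np.random.default_rng(4)
        w_true = np.array([0.5, -1.5, 2.0])
        w, P = np.zeros(3), 1000.0 * np.eye(3)
        for _ in range(500):
            x = rng.normal(size=3)
            y = w_true @ x + 0.01 * rng.normal()
            w, P = rls_update(w, P, x, y)
        print(w)   # converges close to w_true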

  11. Analysis of total least squares in estimating the parameters of a mortar trajectory

    SciTech Connect

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as in the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided slightly improved results, on the order of 10%, over the LS method.
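
    A minimal sketch of the LS/TLS distinction on a toy line fit, assuming the classical SVD construction of the TLS solution (the right singular vector of the smallest singular value of the augmented matrix [A | b]); the noise levels and data are illustrative, not the mortar-trajectory problem.

        import numpy as np

        def tls(A, b):
            # Total least squares: [A | b] v = 0 for the right singular vector v
            # of the smallest singular value; errors allowed in A as well as b.
            Z = np.column_stack([A, b])
            _, _, Vt = np.linalg.svd(Z)
            v = Vt[-1]
            return -v[:-1] / v[-1]

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 1.0, 200)
        A_true = np.column_stack([t, np.ones_like(t)])
        b_true = A_true @ np.array([2.0, 1.0])
        A = A_true + 0.02 * rng.normal(size=A_true.shape)   # error in the data matrix
        b = b_true + 0.02 * rng.normal(size=b_true.shape)   # error in the observations
        print("LS :", np.linalg.lstsq(A, b, rcond=None)[0])
        print("TLS:", tls(A, b))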

  12. A new algorithm for constrained nonlinear least-squares problems, part 1

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, F. T.

    1983-01-01

    A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.

  13. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  14. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  15. Landsat-4 (TDRSS-user) orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1992-01-01

    TDRSS user orbit determination is analyzed using a batch least-squares method and a sequential estimation method. It was found that in the batch least-squares method analysis, the orbit determination consistency for Landsat-4, which was heavily tracked by TDRSS during January 1991, was about 4 meters in the rms overlap comparisons and about 6 meters in the maximum position differences in overlap comparisons. The consistency was about 10 to 30 meters in the 3 sigma state error covariance function in the sequential method analysis. As a measure of consistency, the first residual of each pass was within the 3 sigma bound in the residual space.

  16. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels (eucalyptus, soil understory, and shade) was performed. The fraction images for shade (shade images) generated by these two methods were compared in terms of performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
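
    A minimal sketch of constrained least-squares unmixing under assumptions of my own choosing: nonnegative fractions with an approximate sum-to-one constraint imposed by appending a heavily weighted row of ones, solved with SciPy's nonnegative least squares. The endmember spectra and abundances are hypothetical.

        import numpy as np
        from scipy.optimize import nnls

        def unmix(E, pixel, delta=100.0):
            # Nonnegative fractions that approximately sum to one: the appended
            # row delta * [1, ..., 1] = delta softly enforces closure.
            E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
            p_aug = np.append(pixel, delta)
            f, _ = nnls(E_aug, p_aug)
            return f

        # Columns are hypothetical endmember spectra over four bands.
        E = np.array([[0.90, 0.10, 0.05],
                      [0.80, 0.20, 0.05],
                      [0.70, 0.40, 0.05],
                      [0.60, 0.50, 0.05]])
        pixel = E @ np.array([0.5, 0.3, 0.2])     # synthetic mixed pixel
        print(unmix(E, pixel))                    # -> approximately [0.5 0.3 0.2]

    Raising delta tightens the sum-to-one closure at the cost of spectral fit; this weighting trick is a common way to blend both constraint types in a single nonnegative solve.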

  17. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction model which has tensor input and output. The objective of this study is to build a prediction model of the relationship between sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that on the premise of predicting two dimensional variables, HOPLS had the same predictive ability and a lower dispersion degree compared with partial least squares (PLS).

  18. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, and the recovery then becomes a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in PSF and noise energy, blurred images can be quite different. It is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least squares support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA. The fitness function of FOA is calculated by the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up the restoration rate and

  19. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

    NASA Technical Reports Server (NTRS)

    Crowley, J. K.; Clark, R. N.

    1992-01-01

    Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990 and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are fit, over selected wavelength intervals, to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals, as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.

  20. Fast integer least-squares estimation for GNSS high-dimensional ambiguity resolution using lattice theory

    NASA Astrophysics Data System (ADS)

    Jazaeri, S.; Amiri-Simkooei, A. R.; Sharifi, M. A.

    2012-02-01

    GNSS ambiguity resolution is the key issue in the high-precision relative geodetic positioning and navigation applications. It is a problem of integer programming plus integer quality evaluation. Different integer search estimation methods have been proposed for the integer solution of ambiguity resolution. Slow rate of convergence is the main obstacle to the existing methods where tens of ambiguities are involved. Herein, integer search estimation for the GNSS ambiguity resolution based on the lattice theory is proposed. It is mathematically shown that the closest lattice point problem is the same as the integer least-squares (ILS) estimation problem and that the lattice reduction speeds up searching process. We have implemented three integer search strategies: Agrell, Eriksson, Vardy, Zeger (AEVZ), modification of Schnorr-Euchner enumeration (M-SE) and modification of Viterbo-Boutros enumeration (M-VB). The methods have been numerically implemented in several simulated examples under different scenarios and over 100 independent runs. The decorrelation process (or unimodular transformations) has been first used to transform the original ILS problem to a new one in all simulations. We have then applied different search algorithms to the transformed ILS problem. The numerical simulations have shown that AEVZ, M-SE, and M-VB are about 320, 120 and 50 times faster than LAMBDA, respectively, for a search space of dimension 40. This number could change to about 350, 160 and 60 for dimension 45. The AEVZ is shown to be faster than MLAMBDA by a factor of 5. Similar conclusions could be made using the application of the proposed algorithms to the real GPS data.
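
    As background for the search strategies compared above, the sketch below shows only the naive integer least-squares baseline, rounding the real-valued solution componentwise; it is not AEVZ, M-SE, M-VB, or LAMBDA, which decorrelate the problem and then search the lattice. Data and names are illustrative.

        import numpy as np

        def float_then_round(A, y):
            # Naive ILS baseline: real-valued least squares, then rounding.
            # Suboptimal when the ambiguities are strongly correlated.
            x_float, *_ = np.linalg.lstsq(A, y, rcond=None)
            return np.rint(x_float).astype(int)

        rng = np.random.default_rng(6)
        A = rng.normal(size=(12, 6))
        x_true = rng.integers(-10, 10, size=6)
        y = A @ x_true + 0.05 * rng.normal(size=12)
        print(np.array_equal(float_then_round(A, y), x_true))   # usually True here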

  1. A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.

    2011-01-01

    Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…

  2. Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator

    ERIC Educational Resources Information Center

    Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2011-01-01

    The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…

  3. ON ASYMPTOTIC DISTRIBUTION AND ASYMPTOTIC EFFICIENCY OF LEAST SQUARES ESTIMATORS OF SPATIAL VARIOGRAM PARAMETERS. (R827257)

    EPA Science Inventory

    Abstract

    In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...

  4. Using R^2 to compare least-squares fit models: When it must fail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    R^2 can be used correctly to select from among competing least-squares fit models when the data are fitted in common form and with common weighting. However, then R^2 comparisons become equivalent to comparisons of the estimated fit variance s^2 in unweighted fitting, or of the reduced chi-square in...

  5. Representing Topography with Second-Degree Bivariate Polynomial Functions Fitted by Least Squares.

    ERIC Educational Resources Information Center

    Neuman, Arthur Edward

    1987-01-01

    There is a need for abstracting topography other than for mapping purposes. The method employed should be simple and available to non-specialists, thereby ruling out spline representations. Generalizing from univariate first-degree least squares and from multiple regression, this article introduces bivariate polynomial functions fitted by least…

  6. Regression with Qualitative and Quantitative Variables: An Alternating Least Squares Method with Optimal Scaling Features

    ERIC Educational Resources Information Center

    And Others; Young, Forrest W.

    1976-01-01

    A method is discussed which extends canonical regression analysis to the situation where the variables may be measured as nominal, ordinal, or interval, and where they may be either continuous or discrete. The method, which is purely descriptive, uses an alternating least squares algorithm and is robust. Examples are provided. (Author/JKS)

  7. Partial least squares correspondence analysis: A framework to simultaneously analyze behavioral and genetic data.

    PubMed

    Beaton, Derek; Dunlop, Joseph; Abdi, Hervé

    2016-12-01

    For nearly a century, detecting the genetic contributions to cognitive and behavioral phenomena has been a core interest of psychological research. Recently, this interest has been reinvigorated by the availability of genotyping technologies (e.g., microarrays) that provide new genetic data, such as single nucleotide polymorphisms (SNPs). These SNPs, which represent pairs of nucleotide letters (e.g., AA, AG, or GG) found at specific positions on human chromosomes, are best considered as categorical variables, but this coding scheme can make the multivariate analysis of their relationships with behavioral measurements difficult, because most multivariate techniques developed for the analysis of relationships between sets of variables are designed for quantitative variables. To alleviate this problem, we present a generalization of partial least squares (a technique used to extract the information common to two different data tables measured on the same observations), called partial least squares correspondence analysis, that is specifically tailored to the analysis of categorical and mixed ("heterogeneous") data types. Here, we formally define and illustrate, in a tutorial format, how partial least squares correspondence analysis extends to various types of data and design problems that are particularly relevant for psychological research that includes genetic data. We illustrate partial least squares correspondence analysis with genetic, behavioral, and neuroimaging data from the Alzheimer's Disease Neuroimaging Initiative. R code is available on the Comprehensive R Archive Network and via the authors' websites. (PsycINFO Database Record

  8. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    NASA Astrophysics Data System (ADS)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower-upper-middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
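
    The iterative scheme described above can be sketched generically: conjugate-gradient least squares minimizes ||d - Lm||^2 using only the modeling (demigration) operator and its adjoint (migration). The toy dense operator below stands in for the wave-equation operator, and the structure-oriented preconditioning filter is omitted; all names are illustrative.

        import numpy as np

        def cgls(apply_L, apply_Lt, d, n, iters=100, tol=1e-12):
            # Conjugate-gradient least squares on the normal equations,
            # touching the operator only through forward/adjoint applications.
            m = np.zeros(n)
            r = d.copy()                # data residual d - L m
            s = apply_Lt(r)             # model-space gradient
            p = s.copy()
            gamma = s @ s
            for _ in range(iters):
                if np.sqrt(gamma) < tol:          # gradient small: converged
                    break
                q = apply_L(p)
                alpha = gamma / (q @ q)
                m += alpha * p
                r -= alpha * q
                s = apply_Lt(r)
                gamma_new = s @ s
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return m

        rng = np.random.default_rng(7)
        L = rng.normal(size=(300, 100))     # dense stand-in for demigration
        m_true = rng.normal(size=100)
        d = L @ m_true
        m = cgls(lambda v: L @ v, lambda v: L.T @ v, d, n=100)
        print(np.linalg.norm(m - m_true) / np.linalg.norm(m_true))   # near zero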

  9. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  10. Analyzing Multilevel Data: Comparing Findings from Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2013-01-01

    This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities three ways: (1) an OLS regression with the student…

  11. Superresolution of 3-D computational integral imaging based on moving least square method.

    PubMed

    Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho

    2014-11-17

    In this paper, we propose an edge directive moving least square (ED-MLS) based superresolution method for computational integral imaging reconstruction (CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, we propose the ED-MLS based superresolution method, which utilizes the properties of the moving least square. The proposed ED-MLS based superresolution takes the direction of the edge into account in the moving least square reconstruction to deal with the abrupt brightness changes in the edge regions, and is less sensitive to the registration error. Furthermore, we propose a framework which shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced and that a high-resolution reconstructed 3-D image can be obtained with the proposed method.
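
    For reference, here is a plain one-dimensional moving least squares evaluation, the building block that ED-MLS extends with edge direction; the Gaussian weight, bandwidth, and polynomial degree are illustrative assumptions.

        import numpy as np

        def mls_1d(xs, ys, x0, h=0.5, degree=2):
            # Weighted local polynomial fit centered at x0; the constant term
            # of the local basis (x - x0)^k is the MLS value at x0.
            sw = np.sqrt(np.exp(-((xs - x0) / h) ** 2))   # sqrt of Gaussian weights
            V = np.vander(xs - x0, degree + 1)
            coef, *_ = np.linalg.lstsq(V * sw[:, None], sw * ys, rcond=None)
            return coef[-1]

        xs = np.linspace(0.0, 2.0 * np.pi, 40)
        ys = np.sin(xs) + 0.05 * np.random.default_rng(8).normal(size=40)
        print(mls_1d(xs, ys, np.pi / 2))   # -> approximately 1.0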

  12. Uncertainty in calculating vorticity from 2D velocity fields using circulation and least-squares approaches

    NASA Astrophysics Data System (ADS)

    Abrahamson, S.; Lonnes, S.

    1995-11-01

    The most common method for determining vorticity from planar velocity information is the circulation method. Its performance has been evaluated using a plane of velocity data obtained from a direct numerical simulation (DNS) of a three-dimensional plane shear layer. Both the ability to reproduce the vorticity from the exact velocity field and from one perturbed by a 5% random “uncertainty” were assessed. To minimize the sensitivity to velocity uncertainties, a new method was developed using a least-squares approach. The local velocity data are fitted to a model velocity field consisting of uniform translation, rigid rotation, a point source, and plane shear. The least-squares method was evaluated in the same manner as the circulation method. The largest differences between the actual and calculated vorticity fields were due to the filter-like nature of the methods. The new method is less sensitive to experimental uncertainty. However, the circulation method proved to be slightly better at reproducing the DNS field. The least-squares method provides additional information beyond the circulation method results. Using the correlation P̄_ωω and a vorticity threshold criterion to identify regions of rigid rotation (or eddies), the rigid rotation component of the least-squares method indicates these same regions.
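
    A stripped-down version of such a least-squares fit is easy to sketch: fit a linear velocity model u ≈ u0 + A·r to the velocities at neighboring points and read the vorticity off the antisymmetric part of the fitted gradient A. The sketch below keeps only the translation and gradient terms (the paper's model also carries explicit point-source and shear components); names and stencil are illustrative.

```python
# Sketch: least-squares fit of a local linear velocity model u ≈ u0 + A·r;
# the vorticity is the antisymmetric part of the fitted gradient A. The paper's
# model carries explicit source/shear terms; this keeps only the linear field.
import numpy as np

def vorticity_lsq(xy, uv):
    """xy: (n, 2) positions relative to the point; uv: (n, 2) velocities."""
    G = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])  # [1, x, y]
    cu = np.linalg.lstsq(G, uv[:, 0], rcond=None)[0]  # u ≈ u0 + u_x·x + u_y·y
    cv = np.linalg.lstsq(G, uv[:, 1], rcond=None)[0]
    return cv[1] - cu[2]                              # ω_z = ∂v/∂x − ∂u/∂y

# Rigid rotation u = −y, v = x has vorticity ω_z = 2.
pts = np.random.default_rng(2).uniform(-1, 1, size=(20, 2))
vel = np.column_stack([-pts[:, 1], pts[:, 0]])
print(vorticity_lsq(pts, vel))                        # ≈ 2.0
```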

  13. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…

  14. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    ERIC Educational Resources Information Center

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  15. Investigating Importance Weighting of Satisfaction Scores from a Formative Model with Partial Least Squares Analysis

    ERIC Educational Resources Information Center

    Wu, Chia-Huei; Chen, Lung Hung; Tsai, Ying-Mei

    2009-01-01

    This study introduced a formative model to investigate the utility of importance weighting on satisfaction scores with partial least squares analysis. Based on the bottom-up theory of satisfaction evaluations, the measurement structure for weighted/unweighted domain satisfaction scores was modeled as a formative model, whereas the measurement…

  16. The Use of Orthogonal Distances in Generating the Total Least Squares Estimate

    ERIC Educational Resources Information Center

    Glaister, P.

    2005-01-01

    The method of least squares enables the determination of an estimate of the slope and intercept of a straight line relationship between two quantities or variables X and Y. Although a theoretical relationship may exist between X and Y of the form Y = mX + c, in practice experimental or measurement errors will occur, and the observed or measured…

  17. Using Technology to Optimize and Generalize: The Least-Squares Line

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…

  18. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used for diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and local weighting matrices to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated in each group. Finally, the parameters of the DPOAE signal were estimated based on the least-squares estimation principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.

  19. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods.

  20. An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2010-01-01

    The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…

  1. Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Atanasov, Dimitar V.

    2012-01-01

    Many models of cognitive diagnosis, including the "least squares distance model" (LSDM), work under the "conjunctive" assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a "disjunctive" version of the LSDM under which the correct item response occurs when "at…

  2. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  3. A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio

    2013-01-01

    The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which allows obtaining similar information. The…

  4. Risk Bounds for Regularized Least-Squares Algorithm with Operator-Valued Kernels

    DTIC Science & Technology

    2005-05-16

    for regularized least-squares algorithm with operator-valued kernels. Ernesto De Vito (Dipartimento di Matematica, Università…), Andrea Caponnetto. …0915, National Science Foundation (ITR/SYS) Contract No. IIS-0112991, National Science Foundation (ITR) Contract No. IIS-0209289, National Science…

  5. Lane detection and tracking based on improved Hough transform and least-squares method

    NASA Astrophysics Data System (ADS)

    Sun, Peng; Chen, Hui

    2014-11-01

    Lane detection and tracking play important roles in lane departure warning systems (LDWS). In order to improve real-time performance and obtain better lane detection results, an improved algorithm for lane detection and tracking, based on a combination of an improved Hough transform and least-squares fitting, is proposed in this paper. In the image pre-processing stage, a multi-gradient Sobel operator is first used to obtain the edge map of the road image; an adaptive Otsu algorithm is then used to obtain a binary image; and, to meet the single-pixel precision requirement, a fast parallel thinning algorithm is used to extract the skeleton map of the binary image. Lane lines are then initially detected using a polar-angle-constrained Hough transform, which narrows the search scope. Finally, during the tracking phase, a dynamic region of interest (ROI) is set up based on the detection result of the previous image frame, and within the predicted dynamic ROI the least-squares fitting method is used to fit the lane line, which greatly reduces the computation required. A failure-judgment module is also added to improve detection reliability: when the least-squares fitting method fails, the polar-angle-constrained Hough transform is restarted for initial detection, achieving a coordination between the Hough transform and the least-squares fitting method. The algorithm takes into account the robustness of the Hough transform and the real-time performance of least-squares fitting, and sets up a dynamic ROI for lane detection. Experimental results show good lane recognition performance; the average time to complete the pre-processing and lane recognition of one road image is less than 25 ms, demonstrating good real-time performance and strong robustness.
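
    The tracking step lends itself to a compact sketch: collect edge pixels inside the ROI predicted from the previous frame, fit a line by least squares, and fall back to Hough detection when the fit is rejected. The sketch below is a schematic of that logic only; the function names, ROI format, and residual threshold are illustrative assumptions.

```python
# Schematic of the tracking step only: least-squares line fit to edge pixels in
# a predicted ROI, with a failure check. Function names, the ROI format, and the
# residual threshold are illustrative assumptions.
import numpy as np

def fit_lane(edge_pts, roi, max_resid=5.0):
    """edge_pts: (n, 2) array of (x, y) pixels; roi: (xmin, xmax, ymin, ymax)."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    m = (x >= roi[0]) & (x <= roi[1]) & (y >= roi[2]) & (y <= roi[3])
    if m.sum() < 2:
        return None                       # failure: restart Hough detection
    coef = np.polyfit(y[m], x[m], 1)      # x = a·y + b (lanes are near-vertical)
    resid = np.abs(np.polyval(coef, y[m]) - x[m])
    return coef if resid.mean() < max_resid else None  # failure-judgment check

pts = np.array([[100 + 0.5 * v, v] for v in range(0, 200, 10)], dtype=float)
print(fit_lane(pts, (0, 400, 0, 200)))    # ≈ [0.5, 100.0]
```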

  6. Comparison of ordinary, weighted, and generalized least-squares straight-line calibrations for LC-MS-MS, GC-MS, HPLC, GC, and enzymatic assay.

    PubMed

    Duer, Wayne C; Ogren, Paul J; Meetze, Alison; Kitchen, Chester J; Von Lindern, Ryan; Yaworsky, Dustin C; Boden, Christopher; Gayer, Jeffery A

    2008-06-01

    The impact of experimental errors in one or both variables on the use of linear least-squares was investigated for method calibrations (response = intercept + slope × concentration, or equivalently, Y = a1 + a2X) frequently used in analytical toxicology. In principle, the most reliable calibrations should consider errors from all sources, but consideration of concentration (X) uncertainties has not been common due to complex fitting algorithm requirements. Data were obtained for liquid chromatography-tandem mass spectrometry, gas chromatography-mass spectrometry, high-performance liquid chromatography, gas chromatography, and enzymatic assay. The required experimental uncertainties in response were obtained from replicate measurements. The required experimental uncertainties in concentration were determined from manufacturers' furnished uncertainties in stock solutions coupled with uncertainties imparted by dilution techniques. The mathematical fitting techniques used in the investigation were ordinary least-squares, weighted least-squares (WOLS), and generalized least-squares (GLS). GLS best-fit results, obtained with an efficient iteration algorithm implemented in a spreadsheet format, are used with a modified WOLS-based formula to derive reliable uncertainties in calculated concentrations. It was found that while the values of the intercepts and slopes were not markedly different for the different techniques, the derived uncertainties in parameters were different. Such differences can significantly affect the predicted uncertainties in concentrations derived from the use of the different linear least-squares equations.
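
    For the weighted case, the closed-form straight-line fit and its parameter uncertainties are standard textbook formulas (with D = S·Sxx − Sx²); a minimal sketch follows, assuming per-point response standard deviations from replicates. The GLS treatment of X errors (e.g., via effective variances) is not reproduced here.

```python
# Closed-form weighted least-squares line fit Y = a1 + a2·X with per-point
# response uncertainties; a sketch of the WLS piece only (the GLS treatment of
# X errors, e.g. via effective variances, is not reproduced here).
import numpy as np

def wls_line(x, y, sigma_y):
    w = 1.0 / sigma_y**2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    D = S * Sxx - Sx**2
    a1 = (Sxx * Sy - Sx * Sxy) / D        # intercept
    a2 = (S * Sxy - Sx * Sy) / D          # slope
    return (a1, a2), (np.sqrt(Sxx / D), np.sqrt(S / D))  # values, std. errors

x = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])            # concentrations
y = 0.03 + 0.51 * x + np.array([0.01, -0.02, 0.02, 0.05, -0.10, 0.20])
(a1, a2), (s1, s2) = wls_line(x, y, sigma_y=0.02 * (1 + x))
print(f"intercept {a1:.3f} +/- {s1:.3f}, slope {a2:.3f} +/- {s2:.3f}")
```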

  7. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population

    NASA Astrophysics Data System (ADS)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  8. Analysis of p-multigrid solution schemes for discontinuous Galerkin discretizations of flow problems

    NASA Astrophysics Data System (ADS)

    Mascarenhas, Brendan S.

    p-multigrid is a 'multigrid-like' algorithm used to obtain solutions to high-order hp-finite element discretizations. In this method convergence is accelerated by using coarse levels constructed by reducing the order, p, of the approximating polynomial. We have investigated p-multigrid coupled with preconditioned block relaxation schemes to obtain the steady-state solution to discontinuous Galerkin (DG) discretizations of the Euler equations. Block-diagonal, -line, and sweeping preconditioners, and also the alternating direction implicit (ADI) and incomplete lower-upper (ILU(0)) preconditioners, are considered. Relaxation schemes that approximately invert (AI) the steady-state stiffness matrix and implicit pseudo-time-advancing (ITA) schemes are Fourier analyzed and compared. In general, for orders of approximating polynomial p ≥ 2, the AI schemes perform better than the similarly preconditioned ITA schemes. The results show that p-multigrid iterations of the AI-ILU(0) scheme with under-relaxation ω = 1/2 converge fastest and are the most robust of the schemes studied. Similar to prior observations by Helenbrook and Atkins, p-multigrid was observed to behave anomalously when p transitions from 1 to 0. Using ideas from Helenbrook and Atkins' correction for diffusion, and the streamline upwind Petrov-Galerkin (SUPG) formulation, this anomalous behavior is corrected for the 1D convection equation. The correction is then extended to the 1D convection-diffusion equation.

  9. Periodic quasi-orthogonal spline bases and applications to least-squares curve fitting of digital images.

    PubMed

    Flickner, M; Hafner, J; Rodriguez, E J; Sanz, J C

    1996-01-01

    Presents a new covariant basis, dubbed the quasi-orthogonal Q-spline basis, for the space of n-degree periodic uniform splines with k knots. This basis is obtained analogously to the B-spline basis by scaling and periodically translating a single spline function of bounded support. The construction hinges on an important theorem involving the asymptotic behavior (in the dimension) of the inverse of banded Toeplitz matrices. The authors show that the Gram matrix for this basis is nearly diagonal, hence the name "quasi-orthogonal". The new basis is applied to the problem of approximating closed digital curves in 2D images by least-squares fitting. Since the new spline basis is almost orthogonal, the least-squares solution can be approximated by decimating a convolution between a resolution-dependent kernel and the given data. The approximating curve is expressed as a linear combination of the new spline functions and new "control points". Another convolution maps these control points to the classical B-spline control points. A generalization of the result has relevance to the solution of regularized fitting problems.

  10. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve the robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite; the strategy uses shifting in some optimal way, and the method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA; this was made publicly available and was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS [version 3]; as part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  11. A note on the total least squares problem for coplanar points

    SciTech Connect

    Lee, S.L.

    1994-09-01

    The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, …, n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error for coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
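
    Pearson's closed-form fit is equivalent to taking the principal axis of the 2x2 data covariance matrix through the centroid; a minimal numerical sketch in eigen-decomposition form (assuming the fitted line is not vertical):

```python
# Pearson's a priori TLS fit, equivalent to the principal axis of the 2x2 data
# covariance; sketch assumes the fitted line is not vertical.
import numpy as np

def tls_line(x, y):
    C = np.cov(np.vstack([x - x.mean(), y - y.mean()]))  # 2x2 covariance
    evals, evecs = np.linalg.eigh(C)                     # ascending eigenvalues
    dx, dy = evecs[:, 1]                                 # major-axis direction
    slope = dy / dx
    intercept = y.mean() - slope * x.mean()              # line through centroid
    tls_error = evals[0] * (len(x) - 1)                  # sum of squared perpendicular distances
    return slope, intercept, tls_error

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.9, 4.1])
print(tls_line(x, y))
```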

  12. Compact moving least squares: An optimization framework for generating high-order compact meshless discretizations

    NASA Astrophysics Data System (ADS)

    Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe

    2016-12-01

    A generalization of the optimization framework typically used in moving least squares is presented that provides high-order approximation while maintaining compact stencils and a consistent treatment of boundaries. The approach, which we refer to as compact moving least squares, resembles the capabilities of compact finite differences but requires no structure in the underlying set of nodes. An efficient collocation scheme is used to demonstrate the capabilities of the method to solve elliptic boundary value problems in strong form stably, without the need for an expensive weak form. The flexibility of the approach is demonstrated by using the same framework both to solve a variety of elliptic problems and to generate implicit approximations to derivatives. Finally, an efficient preconditioner is presented for the steady Stokes equations, and the approach's efficiency and high order of accuracy are demonstrated for domains with curvilinear boundaries.

  13. Moving least-squares enhanced Shepard interpolation for the fast marching and string methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Liu, Yuli; Sarkar, Utpal; Ayers, Paul W.

    2009-01-01

    The number of potential energy calculations required by the quadratic string method (QSM) and the fast marching method (FMM) is significantly reduced by using Shepard interpolation, with a moving least squares fit of the higher-order derivatives of the potential. The derivatives of the potential are fitted up to fifth order. With an error estimate for the interpolated values, this moving least squares enhanced Shepard interpolation scheme drastically reduces the number of potential energy calculations in FMM, often by up to 80%. Fitting up through the highest order tested here (fifth order) gave the best results for all grid spacings. For QSM, using enhanced Shepard interpolation gave slightly better results than using the usual approximate second-order, damped Broyden-Fletcher-Goldfarb-Shanno updated Hessian to approximate the surface. To test these methods we examined two analytic potentials, the rotational dihedral potential of alanine dipeptide, and the SN2 reaction of methyl chloride with fluoride.

  14. Difference mapping method using least square support vector regression for variable-fidelity metamodelling

    NASA Astrophysics Data System (ADS)

    Zheng, Jun; Shao, Xinyu; Gao, Liang; Jiang, Ping; Qiu, Haobo

    2015-06-01

    Engineering design, especially for complex engineering systems, is usually a time-consuming process involving computation-intensive computer-based simulation and analysis methods. A difference mapping method using least square support vector regression is developed in this work, as a special metamodelling methodology that includes variable-fidelity data, to replace the computationally expensive computer codes. A general difference mapping framework is proposed in which a surrogate base is first created; the approximation is then gained by mapping the difference between the base and the real high-fidelity response surface. Least square support vector regression is adopted to accomplish the mapping. Two different sampling strategies, nested and non-nested design of experiments, are conducted to explore their respective effects on modelling accuracy. Different sample sizes and three approximation performance measures of accuracy are considered.

  15. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    NASA Astrophysics Data System (ADS)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e., the Cramer-Rao lower bound (CRLB) of the estimate, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  16. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
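
    The on-line identification step named above, recursive least squares, admits a compact generic sketch; the regressor contents, dimensions, and forgetting factor below are illustrative stand-ins, not the GTM/VCCTEF aerodynamic model.

```python
# Generic recursive least-squares (RLS) sketch for a linear-in-parameters model
# y = phi^T theta; the regressor contents, dimensions, and forgetting factor are
# illustrative stand-ins, not the GTM/VCCTEF aerodynamic model.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # parameter update
    P = (P - np.outer(k, Pphi)) / lam        # covariance update
    return theta, P

rng = np.random.default_rng(3)
theta_true = np.array([0.02, -0.15, 0.80])
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(500):
    phi = rng.standard_normal(3)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)                                 # ≈ theta_true
```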

  17. Empirical mode decomposition-adaptive least squares method for dynamic calibration of pressure sensors

    NASA Astrophysics Data System (ADS)

    Yao, Zhenjian; Wang, Zhongyu; Yi-Lin Forrest, Jeffrey; Wang, Qiyue; Lv, Jing

    2017-04-01

    In this paper, an approach combining empirical mode decomposition (EMD) with adaptive least squares (ALS) is proposed to improve the dynamic calibration accuracy of pressure sensors. With EMD, the original output of the sensor can be represented as sums of zero-mean amplitude modulation frequency modulation components. By identifying and excluding those components involved in noises, the noise-free output could be reconstructed with the useful frequency modulation ones. Then the least squares method is iteratively performed to estimate the optimal order and parameters of the mathematical model. The dynamic characteristic parameters of the sensor can be derived from the model in both time and frequency domains. A series of shock tube calibration tests are carried out to validate the performance of this method. Experimental results show that the proposed method works well in reducing the influence of noise and yields an appropriate mathematical model. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing ones.

  18. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    SciTech Connect

    Verdoolaege, Geert

    2015-01-13

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  19. Evaluation of fatty proportion in fatty liver using least squares method with constraints.

    PubMed

    Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay

    2014-01-01

    Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from an ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since the attenuation value of fat is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method can produce accurate attenuation estimates. The attenuation values are shown to have a positive correlation with the fatty proportion, which can be used to evaluate the syndrome of fatty liver.

  20. Least squares algorithm for region-of-interest evaluation in emission tomography

    SciTech Connect

    Formiconi, A.R. (Dipt. di Fisiopatologia Clinica)

    1993-03-01

    In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction: filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, the bias of the estimates decreased with increasing number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.

  1. Least Squares Adaptive and Bayes Optimal Array Processors for the Active Sonar Problem

    DTIC Science & Technology

    1989-10-01

    False alarm: Q_10 = ∫_{R1} p_0(z) dz (3.15); Miss: Q_01 = ∫_{R0} p_1(z) dz (3.16); Detection: Q_11 = ∫_{R1} p_1(z) dz (3.17); Null decision: Q_00 = ∫_{R0} p_0(z) dz (3.18). … least squares lattice filter structures for adaptive processing of complex acoustic data, The Pennsylvania State University, 1984. (Masters Thesis)

  2. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    SciTech Connect

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
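
    As a concrete (if simplified) illustration of least-squares spline smoothing with chosen knots, SciPy's LSQUnivariateSpline fits a cubic spline to noisy samples for a fixed interior-knot set; the adaptive, time-varying bandwidth selection described above is the part this sketch does not attempt, and the knot count and noise level are assumptions.

```python
# Least-squares cubic-spline smoothing with a fixed set of interior knots via
# SciPy; the adaptive, time-varying bandwidth selection described above is the
# part this sketch does not attempt. Knot count and noise level are assumptions.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 400)
signal = np.tanh(20 * (t - 0.5))              # step-like "oscilloscope" record
data = signal + 0.05 * rng.standard_normal(t.size)

knots = np.linspace(0.05, 0.95, 25)           # denser knots = higher bandwidth
spl = LSQUnivariateSpline(t, data, knots, k=3)
print(f"RMS error vs. true signal: {np.sqrt(np.mean((spl(t) - signal)**2)):.4f}")
```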

  3. Separable least squares identification of long memory block structured models: application to lung tissue viscoelasticity.

    PubMed

    Westwick, David T; Suki, Bela

    2006-01-01

    A separable least squares algorithm is developed for the identification of a Wiener model whose dynamic element is a constant phase model that has been modified to include a purely viscous term. The separation of variables reduces the dimensionality of the search space from 5 to 2, greatly simplifying the optimization procedure used to estimate the parameters. The algorithm is tested on experimental stress/strain data from a strip of lung parenchyma.

  4. A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, Fred T.

    1992-01-01

    A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.

  5. Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique

    NASA Astrophysics Data System (ADS)

    Järvinen, S. P.; Berdyugina, S. V.

    2010-10-01

    Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), in which low signal-to-noise spectral data are improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in the data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which

  6. Modeling nonbilinear total synchronous fluorescence data matrices with a novel adapted partial least squares method.

    PubMed

    Schenone, Agustina V; de Araújo Gomes, Adriano; Culzoni, María J; Campiglia, Andrés D; de Araújo, Mário Cesar Ugulino; Goicoechea, Héctor C

    2015-02-15

    A new residual modeling algorithm for nonbilinear data is presented, namely unfolded partial least squares with interference modeling of nonbilinear data by multivariate curve resolution-alternating least squares (U-PLS/IMNB/MCR-ALS). Nonbilinearity represents a challenging data structure problem for achieving analyte quantitation from second-order data in the presence of uncalibrated components. Total synchronous fluorescence spectroscopy (TSFS) generates matrices which constitute a typical example of this kind of data. Although the nonbilinear profile of the interferent can be achieved by modeling TSFS data with unfolded partial least squares with residual bilinearization (U-PLS/RBL), an extremely large number of RBL factors has to be considered. Simulated data show that the new model can conveniently handle the studied analytical problem with better performance than PARAFAC, U-PLS/RBL and MCR-ALS, the latter modeling the unfolded data. Besides, an example involving real TSFS matrices illustrates the ability of the new method to handle experimental data; it consists of the determination of ciprofloxacin in the presence of norfloxacin as an interferent in water samples.

  7. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely-used linear method transforms the nonlinear model by assuming a fixed TR in the SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using a first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
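
    For reference, the fixed-TR linearization the abstract alludes to (often called DESPOT1) can be sketched in a few lines; the acquisition parameters below are illustrative.

```python
# The standard fixed-TR linearization (often called DESPOT1) that the abstract
# contrasts with: with E1 = exp(-TR/T1), S/sin(a) = E1*S/tan(a) + M0*(1 - E1),
# so a straight-line fit yields E1 and hence T1. Parameter values illustrative.
import numpy as np

TR, T1_true, M0 = 15.0, 1000.0, 1.0           # ms, ms, arbitrary units
alphas = np.deg2rad([2.0, 5.0, 10.0, 15.0, 20.0])
E1 = np.exp(-TR / T1_true)
S = M0 * np.sin(alphas) * (1 - E1) / (1 - E1 * np.cos(alphas))  # SPGR signal

x, y = S / np.tan(alphas), S / np.sin(alphas)
slope, intercept = np.polyfit(x, y, 1)        # slope estimates E1
print(f"T1 estimate: {-TR / np.log(slope):.1f} ms")             # ≈ 1000 ms
```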

  8. Analysis and computation of a least-squares method for consistent mesh tying

    NASA Astrophysics Data System (ADS)

    Day, David; Bochev, Pavel

    2008-08-01

    In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197-1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342-352], applied to the partial differential equation −∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Theoretical error estimates are illustrated by numerical experiments.

  9. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    NASA Astrophysics Data System (ADS)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We used UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves, and that the fewer the training samples, the greater the improvement. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  10. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a magnified coded source imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  11. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as the matrix determinant, the matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(θ) ≈ θ holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is
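
    The unconstrained solve is a plain linear least-squares problem: stacking reference vectors as rows of R and measurements as rows of M, one solves R·Tᵀ ≈ M and, if desired, re-orthogonalizes afterwards. A minimal sketch follows; the SVD projection step at the end is an optional illustration, not part of the simplified algorithm described.

```python
# Unconstrained least-squares attitude sketch: with reference vectors as rows of
# R and measurements as rows of M, solve R @ T^T ≈ M. The SVD re-orthogonalization
# at the end is an optional illustration, not part of the described algorithm.
import numpy as np

rng = np.random.default_rng(5)
ang = np.deg2rad(10.0)
T_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])

R = rng.standard_normal((20, 3))                        # reference vectors
M = R @ T_true.T + 1e-3 * rng.standard_normal((20, 3))  # noisy measurements

Tt = np.linalg.lstsq(R, M, rcond=None)[0]               # solves R @ Tt ≈ M
T_ls = Tt.T                                             # nearly orthogonal
U, _, Vt = np.linalg.svd(T_ls)
print(np.round(T_ls, 4), np.round(U @ Vt, 4), sep="\n") # raw and projected
```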

  12. Interpolating moving least-squares methods for fitting potential-energy surfaces: further improvement of efficiency via cutoff strategies.

    PubMed

    Kawano, Akio; Tokmakov, Igor V; Thompson, Donald L; Wagner, Albert F; Minkoff, Michael

    2006-02-07

    In standard applications of interpolating moving least squares (IMLS) for fitting a potential-energy surface (PES), all available ab initio points are used. Because remote ab initio points negligibly influence IMLS accuracy and increase IMLS time-to-solution, we present two methods to locally restrict the number of points included in a particular fit. The fixed radius cutoff (FRC) method includes ab initio points within a hypersphere of fixed radius. The density adaptive cutoff (DAC) method includes points within a hypersphere of variable radius depending on the point density. We test these methods by fitting a six-dimensional analytical PES for hydrogen peroxide. Both methods reduce the IMLS time-to-solution by about an order of magnitude relative to that when no cutoff method is used. The DAC method is more robust and efficient than the FRC method.

  13. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.

  14. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach has proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).

  15. Clustering technique-based least square support vector machine for EEG signal classification.

    PubMed

    Siuly; Li, Yan; Wen, Peng Paul

    2011-12-01

    This paper presents a new approach called clustering technique-based least square support vector machine (CT-LS-SVM) for the classification of EEG signals. Decision making is performed in two stages. In the first stage, clustering technique (CT) has been used to extract representative features of EEG data. In the second stage, least square support vector machine (LS-SVM) is applied to the extracted features to classify two-class EEG signals. To demonstrate the effectiveness of the proposed method, several experiments have been conducted on three publicly available benchmark databases, one for epileptic EEG data, one for mental imagery tasks EEG data and another one for motor imagery EEG data. Our proposed approach achieves an average sensitivity, specificity and classification accuracy of 94.92%, 93.44% and 94.18%, respectively, for the epileptic EEG data; 83.98%, 84.37% and 84.17% respectively, for the motor imagery EEG data; and 64.61%, 58.77% and 61.69%, respectively, for the mental imagery tasks EEG data. The performance of the CT-LS-SVM algorithm is compared in terms of classification accuracy and execution (running) time with our previous study where simple random sampling with a least square support vector machine (SRS-LS-SVM) was employed for EEG signal classification. We also compare the proposed method with other existing methods in the literature for the three databases. The experimental results show that the proposed algorithm can produce a better classification rate than the previous reported methods and takes much less execution time compared to the SRS-LS-SVM technique. The research findings in this paper indicate that the proposed approach is very efficient for classification of two-class EEG signals.
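
    The LS-SVM stage used above has a particularly simple training step: unlike a standard SVM's quadratic program, the dual reduces to one symmetric linear system. A minimal binary-classification sketch in the Suykens formulation follows; gamma, the RBF width, and the toy data are assumptions, not the paper's settings.

```python
# Minimal LS-SVM binary classifier in the Suykens formulation: training reduces
# to one symmetric linear system instead of a QP. gamma, the RBF width, and the
# toy data are assumptions, not the paper's settings.
import numpy as np

def rbf(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

def lssvm_train(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = (y[:, None] * y[None, :]) * rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                                # b, alpha

def lssvm_predict(X, y, alpha, b, Xnew, s=1.0):
    return np.sign(rbf(Xnew, X, s) @ (alpha * y) + b)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.r_[-np.ones(30), np.ones(30)]
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())     # training accuracy
```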

  16. Small-kernel, constrained least-squares restoration of sampled image data

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  17. A Pascal program for the least-squares evaluation of standard RBS spectra

    NASA Astrophysics Data System (ADS)

    Hnatowicz, V.; Havránek, V.; Kvítek, J.

    1992-11-01

    A computer program for least-squares fitting of energy spectra obtained in common Rutherford backscattering (RBS) analyses is described. The samples analyzed by the RBS technique are considered to be made up of a finite number of layers, each with uniform composition. The RBS spectra are treated as a combination of a variable number of three different basic figures (strip, bulge and Gaussian), which are represented by ad hoc analytical expressions. The initial parameter estimates are inserted by the operator (with the assistance of graphical support on a TV screen), and the result of the fit is displayed on the screen and stored as a table on hard disk.

  18. A least-squares finite element method for incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1989-01-01

    A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than the saddle-point problem of the classical mixed method, and can thus accommodate equal-order interpolations. The method has no parameters to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds number up to 10,000 and the backward-facing step flow at Reynolds number up to 900 are presented.

  19. Application of the Marquardt least-squares method to the estimation of pulse function parameters

    NASA Astrophysics Data System (ADS)

    Lundengård, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei

    2014-12-01

    Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's, and DEXP functions representing the first positive, first negative, and subsequent negative stroke currents as given in IEC 62305-1 Standard Ed. 2, and also for some other fast- and slow-decaying lightning current waveshapes. The results demonstrate the ability of the MLSM to be used for the estimation of parameters of the functions important in lightning discharge modeling.
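
    A minimal sketch of such a fit, assuming the standard Heidler function with its usual peak-correction factor and SciPy's Levenberg-Marquardt driver; the parameter values, noise level, and fixed exponent n = 10 are illustrative, not the standard's fitted results.

```python
# Levenberg-Marquardt fit of a Heidler-type waveshape
# i(t) = (I0/eta) * (t/tau1)^n / (1 + (t/tau1)^n) * exp(-t/tau2), with the usual
# peak correction eta; parameter values and noise level are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def heidler(t, I0, tau1, tau2, n=10.0):
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
    x = (t / tau1) ** n
    return (I0 / eta) * x / (1 + x) * np.exp(-t / tau2)

t = np.linspace(1e-9, 100e-6, 2000)
rng = np.random.default_rng(7)
data = heidler(t, 30e3, 1.8e-6, 95e-6) + 100.0 * rng.standard_normal(t.size)

popt, pcov = curve_fit(heidler, t, data, p0=[20e3, 1e-6, 50e-6], method="lm")
print(popt)                                 # ≈ [3.0e4, 1.8e-6, 9.5e-5]
```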

  20. Useful and little-known applications of the Least Square Method and some consequences of covariances

    NASA Astrophysics Data System (ADS)

    Helene, Otaviano; Mariano, Leandro; Guimarães-Filho, Zwinglio

    2016-10-01

    Covariances are as important as variances when dealing with experimental data, and they must be considered in fitting procedures and adjustments in order to preserve the statistical properties of the adjusted quantities. In this paper, we apply the Least Square Method in matrix form to several simple problems in order to evaluate the consequences of covariances in the fitting procedure. Among the examples, we demonstrate how a measurement of a physical quantity can change the adopted values of all other covariant quantities and how a new single point (x, y) improves the parameters of a previously adjusted straight line.
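
    The matrix form referred to above is the standard generalized least-squares estimate: p = (AᵀV⁻¹A)⁻¹AᵀV⁻¹y with parameter covariance (AᵀV⁻¹A)⁻¹, where V is the full covariance matrix of the data. A minimal sketch with toy numbers (the common off-diagonal term mimics a shared systematic):

```python
# Matrix-form least squares with a full data covariance V: estimates are
# p = (A^T V^-1 A)^-1 A^T V^-1 y with parameter covariance (A^T V^-1 A)^-1, so
# correlated inputs propagate into (and between) the adjusted parameters.
import numpy as np

def lsm(A, y, V):
    Vi = np.linalg.inv(V)
    Cov = np.linalg.inv(A.T @ Vi @ A)       # covariance of fitted parameters
    return Cov @ A.T @ Vi @ y, Cov

x = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(x), x])   # straight line y = p0 + p1*x
y = np.array([0.05, 1.02, 1.98, 3.10])
V = 0.01 * np.eye(4) + 0.002                # equal variances + common covariance
p, Cov = lsm(A, y, V)
print(p)
print(Cov)
```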

  1. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2016-10-26

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis with Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this zero-mean noise dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error dominated, so that CLS performed better than WLS; for wavenumbers with high absorbance, the noise dominated, and WLS proved better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers with either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor influencing the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors
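
    The selection idea can be sketched in a few lines: per-wavenumber weights switch between CLS-style (unit) and WLS-style (noise-based) weighting around an absorbance threshold. All numbers below, including the threshold standing in for the OTV, are hypothetical:

    import numpy as np

    def solve_wls(K, a, w):
        # Weighted least squares for concentrations c in K @ c ~ a,
        # applied through square-root weights.
        sw = np.sqrt(w)
        return np.linalg.lstsq(K * sw[:, None], a * sw, rcond=None)[0]

    rng = np.random.default_rng(2)
    n_wn, n_comp = 200, 2
    K = np.abs(rng.normal(1.0, 0.5, (n_wn, n_comp)))  # pure-component spectra
    c_true = np.array([0.3, 0.7])
    a = K @ c_true + rng.normal(0.0, 0.01, n_wn)      # mixture absorbance

    threshold = 0.8                                   # stand-in for the OTV
    # Unit (CLS-style) weights below the threshold, noise-based (WLS-style)
    # weights above it, combined in a single weighted solve.
    w = np.where(a > threshold, 1.0 / np.abs(a), 1.0)
    print("estimated concentrations:", solve_wls(K, a, w))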

  2. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique are presented for a simple two-observer, measurement-error-only problem.

  3. Partial least squares prediction of the first hyperpolarizabilities of donor-acceptor polyenic derivatives

    NASA Astrophysics Data System (ADS)

    Machado, A. E. de A.; da Gama, A. A. de S.; de Barros Neto, B.

    2011-09-01

    A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the AM1 values of the HOMO-LUMO energy gap, the ground-state dipole moment and the HOMO energy, together with the number of π-electrons. The regression equation predicts the static β values for the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.

  4. Feature selection of signal-averaged electrocardiograms by orthogonal least squares method

    NASA Astrophysics Data System (ADS)

    Raczyk, Michal; Jankowski, Stanislaw; Piatkowska-Janko, Ewa

    2008-11-01

    A crucial problem in machine learning is finding a representative set of data for building a model, for both classification and approximation tasks. In this paper we present the orthogonal least squares method for feature selection. The presented method was used to find the most important features for identifying patients with sustained ventricular tachycardia after myocardial infarction (SVT+). We show that with the reduced set of descriptors used in the classification process, we obtain results that are better than those obtained with the full set.

  5. STRITERFIT, a least-squares pharmacokinetic curve-fitting package using a programmable calculator.

    PubMed

    Thornhill, D P; Schwerzel, E

    1985-05-01

    A program is described that permits iterative least-squares nonlinear regression fitting of polyexponential curves using the Hewlett-Packard HP-41CV programmable calculator. The program enables the analysis of pharmacokinetic drug level profiles with a high degree of precision. Up to 15 data pairs can be used, and initial estimates of curve parameters are obtained with a stripping procedure. Up to four exponential terms can be accommodated by the program, and there is the option of weighting data according to their reciprocals. Initial slopes cannot be forced through zero. The program may be interrupted at any time in order to examine convergence.

  6. Least-squares modal estimation of wrapped phases: application to phase unwrapping.

    PubMed

    Arines, Justo

    2003-06-10

    Phase unwrapping continues to be an important step in techniques that obtain the phase from Fourier transforms. We propose a fast two-dimensional phase-unwrapping algorithm that has been specially designed to be used as part of an iterative algorithm. It can also be used as the final step of a phase retrieval process together with other unwrapping techniques. The algorithm consists of a modal least-squares estimation of the wrapped phase that uses the derivatives of the wrapped phase as inputs to the linear estimation. A theoretical description of the method, simulations, and experimental validations are presented.
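
    A toy version of the modal estimation step, assuming a small monomial basis and using wrapped differences of the wrapped phase as derivative inputs to a linear least-squares solve:

    import numpy as np

    def wrap(p):
        return np.angle(np.exp(1j * p))

    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx]
    x = x / nx
    y = y / ny

    # Basis {x, y, x^2, xy, y^2} and its analytic partial derivatives (the
    # constant term is unobservable from derivatives, so it is omitted).
    basis_dx = [np.ones_like(x), np.zeros_like(x), 2*x, y, np.zeros_like(x)]
    basis_dy = [np.zeros_like(x), np.ones_like(x), np.zeros_like(x), x, 2*y]

    coef_true = np.array([4.0, -3.0, 7.0, 2.0, 5.0])
    phi = sum(c*b for c, b in zip(coef_true, [x, y, x*x, x*y, y*y]))
    psi = wrap(phi)                                   # wrapped measurements

    # Wrapped differences approximate the true phase derivatives.
    dx_meas = wrap(np.diff(psi, axis=1)) * nx
    dy_meas = wrap(np.diff(psi, axis=0)) * ny

    A = np.vstack([np.column_stack([b[:, :-1].ravel() for b in basis_dx]),
                   np.column_stack([b[:-1, :].ravel() for b in basis_dy])])
    rhs = np.concatenate([dx_meas.ravel(), dy_meas.ravel()])
    coef = np.linalg.lstsq(A, rhs, rcond=None)[0]     # approximates coef_true
    print("estimated coefficients:", np.round(coef, 2))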

  7. A least-squares finite element method for incompressible Navier-Stokes problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1992-01-01

    A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than the saddle-point problem produced by the classic mixed method, and can thus accommodate equal-order interpolations. The method has no parameters to tune. The associated algebraic system is symmetric and positive definite. Numerical results are presented for the cavity flow at Reynolds numbers up to 10,000 and for the backward-facing step flow at Reynolds numbers up to 900.

  8. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporated with distortion corrections. Experimental applications demonstrate that a relative precision on the order of 40,000 is achievable without tedious laboratory calibrations of the camera.
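
    The following hedged sketch shows an iterative least-squares adjustment of a generic camera model with one radial distortion coefficient; the model and parameter names are illustrative rather than the paper's exact formulation:

    import numpy as np
    from scipy.optimize import least_squares

    def project(params, pts):
        # Map normalized object points to image points with radial distortion.
        f, cx, cy, k1 = params
        r2 = np.sum(pts**2, axis=1)
        d = 1.0 + k1 * r2                       # radial distortion factor
        return np.column_stack([f*pts[:, 0]*d + cx, f*pts[:, 1]*d + cy])

    def residuals(params, pts, uv):
        return (project(params, pts) - uv).ravel()

    rng = np.random.default_rng(3)
    pts = rng.uniform(-0.5, 0.5, (100, 2))      # normalized target coordinates
    true = np.array([800.0, 320.0, 240.0, -0.25])
    uv = project(true, pts) + rng.normal(0.0, 0.05, (100, 2))

    fit = least_squares(residuals, x0=[700.0, 300.0, 250.0, 0.0],
                        args=(pts, uv))         # iterative LS adjustment
    print("f, cx, cy, k1 =", np.round(fit.x, 3))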

  9. Monotone spline-based least squares estimation for panel count data with informative observation times.

    PubMed

    Deng, Shirong; Liu, Li; Zhao, Xingqiu

    2015-09-01

    This article discusses the statistical analysis of panel count data when the underlying recurrent event process and observation process may be correlated. For the recurrent event process, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates. For inference on the model parameters, a monotone spline-based least squares estimation approach is developed, and the resulting estimators are consistent and asymptotically normal. In particular, our new approach does not rely on the model specification of the observation process. The proposed inference procedure performs well through simulation studies, and it is illustrated by the analysis of bladder tumor data.

  10. Retinal Oximetry with 510-600 nm Light Based on Partial Least-Squares Regression Technique

    NASA Astrophysics Data System (ADS)

    Arimoto, Hidenobu; Furukawa, Hiromitsu

    2010-11-01

    The oxygen saturation distribution in the retinal blood stream is estimated by measuring spectral images and applying partial least-squares regression. The wavelength range used for the calculation is 510 to 600 nm. The regression model for estimating the retinal oxygen saturation is built on the basis of the arterial and venous blood spectra. The experiment is performed using an originally designed spectral ophthalmoscope. The obtained two-dimensional (2D) oxygen saturation map indicates reasonable oxygen levels across the retina. The measurement quality is compared with those obtained using other wavelength sets and data processing methods.

  11. Least-Squares PN Formulation of the Transport Equation Using Self-Adjoint-Angular-Flux Consistent Boundary Conditions.

    SciTech Connect

    Vincent M. Laboure; Yaqi Wang; Mark D. DeHart

    2016-05-01

    In this paper, we study the Least-Squares (LS) PN form of the transport equation compatible with voids in the context of Continuous Finite Element Methods (CFEM). We first derive weakly imposed boundary conditions which make the LS weak formulation equivalent to the Self-Adjoint Angular Flux (SAAF) variational formulation with a void treatment, in the particular case of constant cross sections and a uniform mesh. We then implement this method in Rattlesnake with the Multiphysics Object Oriented Simulation Environment (MOOSE) framework, using a spherical harmonics (PN) expansion to discretize in angle. We test our implementation using the Method of Manufactured Solutions (MMS) and find the expected convergence behavior in both angle and space. Lastly, we investigate the impact of the global non-conservation of LS by comparing the method with SAAF on a heterogeneous test problem.

  12. A Karhunen-Loève least-squares technique for optimization of geometry of a blunt body in supersonic flow

    NASA Astrophysics Data System (ADS)

    Brooks, Gregory P.; Powers, Joseph M.

    2004-03-01

    A novel Karhunen-Loève (KL) least-squares model for the supersonic flow of an inviscid, calorically perfect ideal gas about an axisymmetric blunt body employing shock-fitting is developed; the KL least-squares model is used to accurately select an optimal configuration which minimizes drag. The accuracy and efficiency of the KL method are compared to those of a pseudospectral method employing global Lagrange interpolating polynomials. KL modes are derived from pseudospectral solutions at Mach 3.5 from a uniform sampling of the design space and subsequently employed as the trial functions for a least-squares method of weighted residuals. Results are presented showing the high accuracy of the method with fewer than 10 KL modes. Close agreement is found between the optimal geometry found using the KL model and that found with the pseudospectral solver. Not including the cost of sampling the design space and building the KL model, the KL least-squares method requires less than half the central processing unit time of the pseudospectral method to achieve the same level of accuracy. A decrease in computational cost of several orders of magnitude, as reported in the literature when comparing the KL method against discrete solvers, is shown not to hold for the current problem. The efficiency is lost because the nature of the nonlinearity renders a priori evaluation of certain necessary integrals impossible, requiring as a consequence many costly reevaluations of the integrals.

  13. Weighted Least Squares Estimates of the Magnetotelluric Transfer Functions from Nonstationary Data

    SciTech Connect

    Stodt, John A.

    1982-11-01

    Magnetotelluric field measurements can generally be viewed as sums of signal and additive random noise components. The standard unweighted least squares estimates of the impedance and tipper functions which are usually calculated from noisy data are not optimal when the measured fields are nonstationary. The nonstationary behavior of the signals and noises should be exploited by weighting the data appropriately to reduce errors in the estimates of the impedances and tippers. Insight into the effects of noise on the estimates is gained by careful development of a statistical model, within a linear system framework, which allows for nonstationary behavior of both the signal and noise components of the measured fields. The signal components are, by definition, linearly related to each other by the impedance and tipper functions. It is therefore appropriate to treat them as deterministic parameters, rather than as random variables, when analyzing the effects of noise on the calculated impedances and tippers. From this viewpoint, weighted least squares procedures are developed to reduce the errors in impedances and tippers which are calculated from nonstationary data.

  14. Online segmentation of time series based on polynomial least-squares approximations.

    PubMed

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial, obtained by means of the update steps, can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data streaming applications, offers high accuracy at very low computational costs.
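
    A simplified illustration of the windowed least-squares idea follows; note that SwiftSeg's efficiency comes from orthogonal-polynomial update steps, whereas this sketch simply refits each window:

    import numpy as np

    def window_features(signal, width, degree=2):
        # Per-window polynomial coefficients and approximation error.
        feats = []
        t = np.arange(width)
        for start in range(len(signal) - width + 1):
            seg = signal[start:start + width]
            coeffs = np.polyfit(t, seg, degree)      # least-squares fit
            err = np.sqrt(np.mean((np.polyval(coeffs, t) - seg) ** 2))
            feats.append((coeffs, err))              # slope/curvature + error
        return feats

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 10.0, 500)
    sig = np.where(t < 5.0, 0.5 * t, 2.5 + 2.0 * (t - 5.0))  # slope change at t=5
    sig = sig + rng.normal(0.0, 0.05, t.size)

    feats = window_features(sig, width=25)
    # A jump in the window error (or in the slope coefficient) can serve as
    # a segmentation criterion, as the paper suggests.
    errors = np.array([e for _, e in feats])
    print("max-error window starts near sample:", int(np.argmax(errors)))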

  15. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  16. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that improve on the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998): each slice is first assumed to be unknown and then interpolated by each method, and the performance of each interpolation method is assessed statistically. The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
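
    A one-dimensional moving least squares sketch, assuming a fixed Gaussian weight function (the paper's method is three-dimensional and data-adapted):

    import numpy as np

    def mls_eval(x_eval, x_data, y_data, degree=2, h=0.15):
        # Evaluate the moving least squares approximant at each point of x_eval.
        out = np.empty(x_eval.size)
        for i, xe in enumerate(x_eval):
            sw = np.exp(-0.5 * ((x_data - xe) / h) ** 2)   # locality weights
            # Polynomial basis centered at the evaluation point, so the value
            # of the local fit at xe is simply its constant coefficient.
            A = np.vander(x_data - xe, degree + 1, increasing=True)
            coef = np.linalg.lstsq(A * sw[:, None], y_data * sw, rcond=None)[0]
            out[i] = coef[0]
        return out

    x = np.linspace(0.0, 1.0, 40)
    y = np.sin(2.0 * np.pi * x)
    x_fine = np.linspace(0.0, 1.0, 200)
    err = np.max(np.abs(mls_eval(x_fine, x, y) - np.sin(2.0 * np.pi * x_fine)))
    print("max approximation error:", float(err))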

  17. Modeling individual HRTF tensor using high-order partial least squares

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Li, Lin

    2014-12-01

    A tensor is used to describe head-related transfer functions (HRTFs) depending on frequencies, sound directions, and anthropometric parameters. It keeps the multi-dimensional structure of the measured HRTFs. To construct a multi-linear HRTF personalization model, an individual core tensor is extracted from the original HRTFs using high-order singular value decomposition (HOSVD). The individual core tensor in lower-dimensional space acts as the output of the multi-linear model. Some key anthropometric parameters, used as the inputs of the model, are selected by Laplacian scores and correlation analyses between all the measured parameters and the individual core tensor. Then, the multi-linear regression model is constructed by high-order partial least squares (HOPLS), aiming to seek a joint subspace approximation for both the selected parameters and the individual core tensor. The numbers of latent variables and loadings are used to control the complexity of the model and prevent overfitting. Compared with the partial least squares regression (PLSR) method, objective simulations demonstrate better performance in predicting individual HRTFs, especially for sound directions ipsilateral to the concerned ear. Subjective listening tests show that the predicted individual HRTFs are close to the measured HRTFs for sound localization.

  18. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    PubMed

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can be transformed into a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.

  19. Weighted least-squares phase unwrapping algorithm based on derivative variance correlation map

    NASA Astrophysics Data System (ADS)

    Lu, Yuangang; Wang, Xiangzhao; Zhang, Xuping

    2007-02-01

    Among different phase unwrapping approaches, weighted least-squares minimization methods are gaining attention. In these algorithms, the weighting coefficients are generated from a quality map. The intrinsic drawbacks of existing quality maps constrain the application of these algorithms: they often fail to handle wrapped phase data containing error sources such as phase discontinuities, noise and undersampling. In order to deal with such intractable wrapped phase data, a new weighted least-squares phase unwrapping algorithm based on a derivative variance correlation map is proposed. In the algorithm, the derivative variance correlation map, a novel quality map, can truly reflect the wrapped phase quality, ensuring a more reliable unwrapped result. The definition of the derivative variance correlation map and the principle of the proposed algorithm are presented in detail. The performance of the new algorithm has been tested with simulated wrapped data of a spherical surface and experimental interferometric synthetic aperture radar (IFSAR) wrapped data. Computer simulation and experimental results have verified that the proposed algorithm can work effectively even when a wrapped phase map contains intractable error sources.

  20. Application of copulas to improve covariance estimation for partial least squares.

    PubMed

    D'Angelo, Gina M; Weissfeld, Lisa A

    2013-02-20

    Dimension reduction techniques, such as partial least squares, are useful for computing summary measures and examining relationships in complex settings. Partial least squares requires an estimate of the covariance matrix as a first step in the analysis, making this estimate critical to the results. In addition, the covariance matrix also forms the basis for other techniques in multivariate analysis, such as principal component analysis and independent component analysis. This paper has been motivated by an example from an imaging study in Alzheimer's disease where there is complete separation between Alzheimer's and control subjects for one of the imaging modalities. This separation occurs in one block of variables and does not occur with the second block of variables, resulting in inaccurate estimates of the covariance. We propose the use of a copula to obtain estimates of the covariance in this setting, where one set of variables comes from a mixture distribution. Simulation studies show that the proposed estimator is an improvement over the standard estimators of covariance. We illustrate the methods with the motivating example from a study in the area of Alzheimer's disease.

  1. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    PubMed Central

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629

  2. Step-heating infrared thermographic inspection of steel structures by applying least-squares regression.

    PubMed

    Zhao, Hanxue; Zhou, Zhenggan; Fan, Jin; Li, Gen; Sun, Guangkai

    2017-02-01

    This paper reports the application of the least-squares regression method to the step-heating thermographic inspection of steel structures. The surface temperature variation of a slab of finite thickness during both the step-heating phase and the cooling-down phase is presented. A mild steel slab with holes of various depths and diameters is chosen as the specimen. Step-heating thermographic inspection experiments are carried out on the specimen with different heating times. The heating as well as the cooling-down phases are recorded with an infrared camera and analyzed separately by linear regression of the double-logarithmic plots of temperature increase versus time. Three statistics of the linear regression, the slope, the coefficient of determination, and the F-test value, are used to create image maps from the processing results. The signal-to-noise ratio of each map is calculated to evaluate the performance of the three imaging methods for different durations of heating and cooling time. The results prove that the F-test value maps perform well for the sequences of the step-heating phase, while the slope maps perform well for the sequences of the cooling-down phase. The optimal heating time and cooling time for a steel structure are also determined. A comparison with the results of the thermographic signal reconstruction (TSR) method shows that the least-squares regression method has better detectability and higher inspection efficiency.
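
    The per-pixel statistics can be illustrated with a short regression on synthetic data: fit the double-logarithmic temperature rise against time and report the slope, coefficient of determination, and an F-test value (here computed from R² for a simple regression):

    import numpy as np
    from scipy import stats

    t = np.linspace(0.5, 10.0, 50)                            # seconds
    rng = np.random.default_rng(5)
    dT = 3.0 * t**0.5 * (1.0 + rng.normal(0.0, 0.02, t.size)) # heating-like rise

    # Linear regression of log(dT) versus log(t); in the inspection, these
    # statistics are computed per pixel to build image maps.
    res = stats.linregress(np.log(t), np.log(dT))
    r2 = res.rvalue ** 2
    n = t.size
    F = r2 / (1.0 - r2) * (n - 2)        # F statistic for simple regression
    print(f"slope={res.slope:.3f}, R^2={r2:.4f}, F={F:.1f}")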

  3. Power-law modeling based on least-squares minimization criteria.

    PubMed

    Hernández-Bermejo, B; Fairén, V; Sorribas, A

    1999-10-01

    The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, its parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law over a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the steady-state predicted using the least-squares power-law is closest to the actual steady-state of the target system.
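
    A minimal sketch of the alternative derivation, assuming a Michaelis-Menten rate law as the target and fitting a power law by least squares in logarithmic space over a chosen concentration range:

    import numpy as np

    Vmax, Km = 1.0, 0.5
    s = np.linspace(0.1, 2.0, 100)          # concentration range of interest
    v = Vmax * s / (Km + s)                 # target rate law

    # Fit log v = log k + g log s over the whole range; the classical
    # power law would instead use the local derivative at one operating point.
    A = np.column_stack([np.ones_like(s), np.log(s)])
    logk, g = np.linalg.lstsq(A, np.log(v), rcond=None)[0]
    print(f"k = {np.exp(logk):.3f}, kinetic order g = {g:.3f}")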

  4. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODEs). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.

  5. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array

    PubMed Central

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-01-01

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing; it usually has to cover the whole water column from the sea surface to the bottom, and such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the sense of minimum norm, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small aperture arrays are processed in a classical shallow water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, proving the proposed algorithm to be effective. PMID:28042828
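
    The core estimation step can be sketched as follows: with fewer hydrophones than modes, the depth-function matrix is wide, and numpy's lstsq returns exactly the minimum-norm least squares solution for the mode amplitudes; the mode shapes and numbers below are toy values:

    import numpy as np

    n_modes, n_hydrophones = 8, 5           # more modes than sensors
    depths = np.linspace(10.0, 50.0, n_hydrophones)
    D = np.column_stack([np.sin((m + 1) * np.pi * depths / 100.0)
                         for m in range(n_modes)])   # depth-function matrix

    rng = np.random.default_rng(6)
    a_true = rng.normal(0.0, 1.0, n_modes)           # true mode amplitudes
    p = D @ a_true + rng.normal(0.0, 0.01, n_hydrophones)  # measured field

    a_hat = np.linalg.lstsq(D, p, rcond=None)[0]     # minimum-norm estimate
    p_recalc = D @ a_hat                             # recalculated field
    print("residual:", float(np.linalg.norm(p - p_recalc)))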

  6. [Biomass Compositional Analysis Using Sparse Partial Least Squares Regression and Near Infrared Spectrum Technique].

    PubMed

    Yao, Yan; Wang, Chang-yue; Liu, Hui-jun; Tang, Jian-bin; Cai, Jin-hui; Wang, Jing-jun

    2015-07-01

    Forest bio-fuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called Sparse Partial Least Squares Regression (SPLS) is used to construct a proximate analysis model, combined with near-infrared spectroscopy, to analyze the fuel characteristics of sawdust. The moisture, ash, volatile and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected with a Nicolet NIR spectrometer. After being filtered by a wavelet transform, all of the samples were divided into a training set and a validation set according to sample category and producing area. SPLS, Principal Component Regression (PCR), Partial Least Squares Regression (PLS) and the Least Absolute Shrinkage and Selection Operator (LASSO) are used to construct prediction models. The results show that SPLS can select grouped wavelengths and improve the prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while those of the other components have not yet been confirmed. In a word, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and it will play an increasingly important role in the field of NIR applications.

  7. Tangent least-squares fitting filtering method for electrical speckle pattern interferometry phase fringe patterns

    NASA Astrophysics Data System (ADS)

    Tang, Chen; Wang, Wenping; Yan, Haiqing; Gu, Xiaohui

    2007-05-01

    An efficient method is proposed to reduce the noise in electrical speckle pattern interferometry (ESPI) phase fringe patterns obtained by any technique. We establish the filtering windows along the tangent direction of the phase fringe patterns. The x and y coordinates of each point in the established filtering windows are defined as the sine and cosine of the half wrapped phase multiplied by a random quantity, and the phase value is then calculated from these points' coordinates based on a least-squares fitting algorithm. We tested the proposed method on computer-simulated speckle phase fringe patterns and an experimentally obtained phase fringe pattern, and compared it with the improved sine/cosine average filtering method [Opt. Commun. 162, 205 (1999)] and the least-squares phase-fitting method [Opt. Lett. 20, 931 (1995)], which may be the most efficient methods. In all cases, our results are even better than those obtained with these two methods. Our method overcomes the main disadvantages encountered by the two methods.

  8. Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.

    PubMed

    Lorenz, Normann

    2016-06-13

    In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRSs are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS.

  9. Dual stacked partial least squares for analysis of near-infrared spectra.

    PubMed

    Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen

    2013-08-20

    A new ensemble learning algorithm is presented for the quantitative analysis of near-infrared spectra. The algorithm combines two steps of stacked regression and Partial Least Squares (PLS) and is termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set; the inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model can provide a more robust prediction result and can be considered an alternative choice for quantitative analytical applications.
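
    An illustrative stacking sketch in the spirit of DSPLS is given below: PLS sub-models are fitted on spectral sub-intervals and combined with weights derived from calibration errors. The intervals, weighting rule, and data are placeholders rather than the paper's exact choices:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    n, p = 80, 120
    X = rng.normal(0.0, 1.0, (n, p))                 # stand-in spectra
    y = X[:, 10] - 0.5 * X[:, 70] + rng.normal(0.0, 0.1, n)

    intervals = [slice(0, 40), slice(40, 80), slice(80, 120)]
    models, weights = [], []
    for sl in intervals:                             # inner stack: sub-models
        m = PLSRegression(n_components=3).fit(X[:, sl], y)
        e = np.mean((m.predict(X[:, sl]).ravel() - y) ** 2)
        models.append(m)
        weights.append(1.0 / e)                      # better fit -> larger weight
    w = np.array(weights) / np.sum(weights)

    x_new = rng.normal(0.0, 1.0, (1, p))             # outer stack: combine
    pred = sum(wi * m.predict(x_new[:, sl]).ravel()[0]
               for wi, m, sl in zip(w, models, intervals))
    print("stacked prediction:", round(float(pred), 3))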

  10. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis.

    PubMed

    Garcia, E; Klaas, I; Amigo, J M; Bro, R; Enevoldsen, C

    2014-12-01

    Lameness causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week for two 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait-scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted, to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 or 4/4) or not lame (score 1/4). Both models achieved sensitivity and specificity values around 80%, both in calibration and cross-validation. At the optimum values in the receiver operating characteristic curve, the false-positive rate was 28% in the parity 1 model, whereas in the parity 2 model it was about half that (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically on one AMS farm, we were able to discriminate nonlame and lame cows, with partial least squares discriminant analysis achieving performance similar to the reference method.

  11. Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.

    PubMed

    Mohammed, Goran Abdulrahman; Hou, Ming

    2016-03-01

    The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model along with several other existing models using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical test scores root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6%) for simulated data and (0.2-5%) for experimental data. The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on the active force-length relationships of skeletal muscles that generate the forces causing movements of human and animal bodies.
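
    A sketch of an asymmetric Gaussian force-length curve and its least-squares fit, with illustrative parameter values only:

    import numpy as np
    from scipy.optimize import curve_fit

    def asym_gauss(l, f_max, l_opt, w_left, w_right):
        # Gaussian with different widths on each side of the optimum length.
        w = np.where(l < l_opt, w_left, w_right)
        return f_max * np.exp(-((l - l_opt) ** 2) / (2.0 * w ** 2))

    rng = np.random.default_rng(8)
    l = np.linspace(0.6, 1.6, 60)                    # normalized muscle length
    f = asym_gauss(l, 1.0, 1.05, 0.15, 0.25) + rng.normal(0.0, 0.02, l.size)

    popt, _ = curve_fit(asym_gauss, l, f, p0=[0.9, 1.0, 0.2, 0.2])
    rmse = np.sqrt(np.mean((asym_gauss(l, *popt) - f) ** 2))
    print("f_max, l_opt, w_left, w_right =", np.round(popt, 3), " RMSE =", round(float(rmse), 4))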

  12. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array.

    PubMed

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-12-30

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing; it usually has to cover the whole water column from the sea surface to the bottom, and such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the sense of minimum norm, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, numerous numerical experiments with three small aperture arrays are processed in a classical shallow water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, proving the proposed algorithm to be effective.

  13. Partial least-squares regression for linking land-cover patterns to soil erosion and sediment yield in watersheds

    NASA Astrophysics Data System (ADS)

    Shi, Z. H.; Ai, L.; Li, X.; Huang, X. D.; Wu, G. L.; Liao, W.

    2013-08-01

    There are strong ties between land cover patterns and soil erosion and sediment yield in watersheds. The spatial configuration of land cover has recently become an important aspect of the study of geomorphological processes related to erosion within watersheds. Many studies have used multivariate regression techniques to explore the response of soil erosion and sediment yield to land cover patterns in watersheds. However, many landscape metrics are highly correlated and may result in redundancy, which violates the assumptions of a traditional least-squares approach, thus leading to singular solutions or otherwise biased parameter estimates and confidence intervals. Here, we investigated the landscape patterns within watersheds in the Upper Du River watershed (8973 km²) in China and examined how the spatial patterns of land cover are related to the soil erosion and sediment yield of watersheds using hydrological modeling and partial least-squares regression (PLSR). The results indicate that the watershed soil erosion and sediment yield are closely associated with the land cover patterns. At the landscape level, landscape characteristics, such as Shannon’s diversity index (SHDI), aggregation index (AI), largest patch index (LPI), contagion (CONTAG), and patch cohesion index (COHESION), were identified as the primary metrics controlling the watershed soil erosion and sediment yield. The landscape characteristics in watersheds could account for as much as 65% and 74% of the variation in soil erosion and sediment yield, respectively. Greater interspersion and an increased number of patch land cover types may significantly accelerate soil erosion and increase sediment export. PLSR can be used to simply determine the relationships between land-cover patterns and watershed soil erosion and sediment yield, providing quantitative information to allow decision makers to make better choices regarding landscape planning. With readily available remote sensing data and rapid

  14. Genetic and least squares algorithms for estimating spectral EIS parameters of prostatic tissues.

    PubMed

    Halter, Ryan J; Hartov, Alex; Paulsen, Keith D; Schned, Alan; Heaney, John

    2008-06-01

    We employed electrical impedance spectroscopy (EIS) to evaluate the electrical properties of prostatic tissues. We collected freshly excised prostates from 23 men immediately following radical prostatectomy. The prostates were sectioned into 3 mm slices, and electrical property measurements of complex resistivity were recorded from each of the slices using an impedance probe over the frequency range of 100 Hz to 100 kHz. The area probed was marked so that, following tissue fixation and slide preparation, histological assessment could be correlated directly with the recorded EIS spectra. Prostate cancer (CaP), benign prostatic hyperplasia (BPH), non-hyperplastic glandular tissue and stroma were the primary prostatic tissue types probed. Genetic and least squares parameter estimation algorithms were implemented for fitting a Cole-type resistivity model to the measured data. The four multi-frequency-based spectral parameters defining the recorded spectrum (ρ∞, Δρ, f_c and α) were determined using these algorithms and statistically analyzed with respect to the tissue type. Both algorithms fit the measured data well, with the least squares algorithm having a better average goodness of fit (95.2 mΩ m versus 109.8 mΩ m) and a faster execution time (80.9 ms versus 13 637 ms) than the genetic algorithm. The mean parameters, from all tissue samples, estimated using the genetic algorithm ranged from 4.44 to 5.55 Ω m, 2.42 to 7.14 Ω m, 3.26 to 6.07 kHz and 0.565 to 0.654 for ρ∞, Δρ, f_c and α, respectively. These same parameters estimated using the least squares algorithm ranged from 4.58 to 5.79 Ω m, 2.18 to 6.98 Ω m, 2.97 to 5.06 kHz and 0.621 to 0.742 for ρ∞, Δρ, f_c and α, respectively. The ranges of these parameters were similar to those reported in the literature. Further, significant differences (p < 0.01) were observed between CaP and BPH for the spectral parameters Δρ and f_c

  15. Hybridization of partial least squares and neural network models for quantifying lunar surface minerals

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Li, Lin; Milliken, Ralph; Song, Kaishan

    2012-09-01

    The goal of this study is to develop an efficient and accurate model for using visible-near infrared reflectance spectra to estimate the abundance of minerals on the lunar surface. Previous studies using partial least squares (PLS) and genetic algorithm-partial least squares (GA-PLS) models for this purpose revealed several drawbacks. PLS has two limitations: (1) redundant spectral bands cannot be removed effectively, and (2) nonlinear spectral mixing (i.e., intimate mixtures) cannot be accommodated. Incorporating GA into the model is an effective way of selecting a set of spectral bands that are the most sensitive to variations in the presence/abundance of lunar minerals, and to some extent it overcomes the first limitation. Given that GA-PLS is still subject to the effect of nonlinearity, here we develop and test a hybrid partial least squares-back propagation neural network (PLS-BPNN) model to determine the effectiveness of BPNN for overcoming the two limitations simultaneously. BPNN takes nonlinearity into account with sigmoid functions, and the weights of redundant spectral bands are significantly decreased through the back propagation learning process. PLS, GA-PLS and PLS-BPNN are tested with the Lunar Soil Characterization Consortium (LSCC) dataset, which includes VIS-NIR reflectance spectra and mineralogy for various soil size fractions, and the accuracy of the models is assessed based on R2 and root mean square error values. The PLS-BPNN model is further tested with 12 additional Apollo soil samples. The results indicate that: (1) PLS-BPNN exhibits the best performance compared with PLS and GA-PLS for retrieving abundances of minerals that are dominant on the lunar surface; (2) PLS-BPNN can overcome the two limitations of PLS; (3) PLS-BPNN has the capability to accommodate spectral effects resulting from variations in particle size. By analyzing PLS beta coefficients, spectral bands selected by GA, and the loading curve of the latent variable with the

  16. Least Squares Orbit Determination Using Partials of Mean Elements from Generalized Method of Averaging

    NASA Astrophysics Data System (ADS)

    Setty, Srinivas; Cefola, Paul

    Orbital debris is a well-known challenge of the space age. Maintaining a precise catalogue of space objects’ ephemerides is required to monitor and actively conduct collision avoidance maneuvers of functioning satellites. Maintaining a catalogue of hundreds of thousands of objects is computationally cumbersome. For this purpose, accurate and fast propagators, along with a similarly fast and accurate orbit determination method to update the catalogue with new tracking data, are required. After investigating a semi-analytical satellite theory for cataloguing, we now present an orbit determination system using partial derivatives of the mean element set, which is used in semi-analytical methods. In this study, the mean elements of semi-analytical satellite theory are combined with well-established estimation procedures for orbit determination. The selected mean elements are in the equinoctial coordinate system, and are averaged for a specific theory - the Draper Semi-analytical Satellite Theory (DSST). Forming a state transition matrix for least squares orbit determination from DSST's mean elements involves the following partial derivatives: (1) the partial derivatives of the equinoctial short-periodic variations with respect to the mean equinoctial elements at the same time (within propagation); (2) the partial derivatives of the equinoctial mean elements at an arbitrary time with respect to the epoch-time equinoctial mean elements; (3) the partial derivatives of the equinoctial mean elements at an arbitrary time with respect to the dynamical parameters (atmospheric drag coefficient and solar radiation pressure coefficient); and (4) the partial derivatives of the equinoctial short-periodic variations with respect to the dynamical parameters. The semi-analytical partial derivatives are composed of averaged partial derivatives and short-periodic partial derivatives. Averaged partial derivatives are updated in time using analytical expressions, which includes certain

  17. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction.

    PubMed

    Gregor, Jens; Fessler, Jeffrey A

    2015-03-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security.
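
    The preconditioned-gradient structure that underlies this comparison can be sketched with a compact SIRT iteration; regularization and the SQS variant are beyond this toy example:

    import numpy as np

    def sirt(A, b, n_iter=200):
        # Classic SIRT: x += C A^T R (b - A x), with diagonal preconditioners
        # built from inverse row and column sums of a nonnegative matrix A.
        row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # R = diag(1/row sums)
        col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # C = diag(1/col sums)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + col * (A.T @ (row * (b - A @ x)))
        return x

    rng = np.random.default_rng(9)
    A = rng.uniform(0.0, 1.0, (300, 100))              # toy nonnegative system
    x_true = rng.uniform(0.0, 1.0, 100)
    b = A @ x_true + rng.normal(0.0, 0.01, 300)

    x_rec = sirt(A, b)
    print("relative error:",
          float(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)))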

  18. A nonlinear quality-related fault detection approach based on modified kernel partial least squares.

    PubMed

    Jiao, Jianfang; Zhao, Ning; Wang, Guang; Yin, Shen

    2017-01-01

    In this paper, a new nonlinear quality-related fault detection method is proposed based on kernel partial least squares (KPLS) model. To deal with the nonlinear characteristics among process variables, the proposed method maps these original variables into feature space in which the linear relationship between kernel matrix and output matrix is realized by means of KPLS. Then the kernel matrix is decomposed into two orthogonal parts by singular value decomposition (SVD) and the statistics for each part are determined appropriately for the purpose of quality-related fault detection. Compared with relevant existing nonlinear approaches, the proposed method has the advantages of simple diagnosis logic and stable performance. A widely used literature example and an industrial process are used for the performance evaluation for the proposed method.

  19. Simultaneous evaluation of interrelated cross sections by generalized least-squares and related data file requirements

    SciTech Connect

    Poenitz, W.P.

    1984-10-25

    Though several cross sections have been designated as standards, they are not basic units and are interrelated by ratio measurements. Moreover, as such interactions as ⁶Li + n and ¹⁰B + n involve only two and three cross sections respectively, total cross section data become useful for the evaluation process. The problem can be resolved by a simultaneous evaluation of the available absolute and shape data for cross sections, ratios, sums, and average cross sections by generalized least-squares. A data file is required for such an evaluation which contains the originally measured quantities and their uncertainty components. Establishing such a file is a substantial task because data were frequently reported as absolute cross sections where ratios were measured, without sufficient information on which reference cross section and which normalization were utilized. Reporting of uncertainties is often missing or incomplete. The requirements for data reporting will be discussed.

  20. On the performance of variable forgetting factor recursive least-squares algorithms

    NASA Astrophysics Data System (ADS)

    Elisei-Iliescu, Camelia; Paleologu, Constantin; Tamaş, Răzvan

    2016-12-01

    The recursive least-squares (RLS) is a very popular adaptive algorithm, which is widely used in many system identification problems. The parameter that crucially influences the performance of the RLS algorithm is the forgetting factor. The value of this parameter leads to a compromise between tracking, misadjustment, and stability. In this paper, we present some insights on the performance of variable forgetting factor RLS (VFF-RLS) algorithms, in the context of system identification. Besides the classical RLS algorithm, we mainly focus on two recently proposed VFF-RLS algorithms. The novelty of the experimental setup is that we use real-world signals provided by Romanian Air Traffic Services Administration, i.e., voice and noise signals corresponding to real communication channels. In this context, the Air Traffic Control (ATC) communication represents a challenging task, usually involving non-stationary environments and stability issues.
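
    For reference, a standard RLS update with a fixed forgetting factor lam is sketched below; a variable-forgetting-factor algorithm would adapt lam at each step instead of keeping it constant:

    import numpy as np

    def rls_identify(x, d, order=4, lam=0.98, delta=100.0):
        # Identify an FIR system from input x and desired output d.
        w = np.zeros(order)                 # current filter estimate
        P = delta * np.eye(order)           # inverse correlation matrix
        for n in range(order - 1, len(x)):
            u = x[n - order + 1:n + 1][::-1]    # regressor, most recent first
            k = P @ u / (lam + u @ P @ u)       # gain vector
            e = d[n] - w @ u                    # a priori error
            w = w + k * e
            P = (P - np.outer(k, u @ P)) / lam  # Riccati-type update
        return w

    rng = np.random.default_rng(10)
    h_true = np.array([0.9, -0.4, 0.2, 0.05])   # unknown system
    x = rng.normal(0.0, 1.0, 2000)
    d = np.convolve(x, h_true)[:len(x)] + rng.normal(0.0, 0.01, len(x))

    print("estimated impulse response:", np.round(rls_identify(x, d), 3))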

  1. SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Xiaomin; Wang, Tingting

    2017-02-01

    In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyperplanes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance the performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term, rather than an edge stop function, is adopted in the energy function to improve the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.

  2. Phase aberration compensation of digital holographic microscopy based on least squares surface fitting

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo

    2009-10-01

    Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, it also introduces a phase aberration into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated with samples of cicada wings and garlic epidermal cells, and the experimental results are consistent with those of the double exposure method.
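
    As a sketch of the core compensation step, the snippet below fits a low-order polynomial surface to an unwrapped phase map by linear least squares on a subsampled grid (fewer points than the full hologram matrix, as the abstract describes) and subtracts it. The polynomial degree and subsampling step are illustrative assumptions.

    ```python
    import numpy as np

    def remove_phase_aberration(phase, deg=2, step=8):
        """Fit a degree-`deg` polynomial surface to an unwrapped phase map by
        least squares on every `step`-th pixel, then subtract the surface."""
        ny, nx = phase.shape
        Y, X = np.mgrid[0:ny, 0:nx].astype(float)
        # monomial basis x^i y^j with i + j <= deg
        terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
        sel = (slice(None, None, step), slice(None, None, step))
        A = np.column_stack([(X[sel] ** i * Y[sel] ** j).ravel() for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, phase[sel].ravel(), rcond=None)
        surface = sum(c * X ** i * Y ** j for c, (i, j) in zip(coef, terms))
        return phase - surface
    ```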

  3. Obtaining the wavefront phase maps of free form surfaces: using the least squares algorithm

    NASA Astrophysics Data System (ADS)

    Villalobos-Mendoza, B.; Aguirre-Aguirre, D.; Granados-Agustín, F.; Cornejo-Rodríguez, A.

    2015-01-01

    This work presents the validation of the least squares algorithm proposed by Morgan (1982) and Greivenkamp (1984) for obtaining the wavefront phase maps of a free form surface. The validation was performed by simulating synthetic interferograms of a free form surface using a Bessel function; each interferogram was simulated with a phase shift of π/20. The algorithm is applied to experimental interferograms obtained in a Twyman-Green interferometer where the phase shifting is performed by an SLM (Spatial Light Modulator) placed in one of its arms; the shifts are produced by displaying all the gray levels from 0 to 255 on the SLM. The phase shifts achieved in this experimental setup are smaller than π/4, so the conventional algorithms cannot be applied.
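
    The Morgan/Greivenkamp least-squares algorithm has a compact pixelwise form when the shift values δ_k are known: each intensity is modeled as I_k = a + b cos δ_k + c sin δ_k and the phase is recovered as atan2(-c, b). The sketch below is a minimal version under that assumption; the frame count, modulation, and grid size are illustrative, with only the π/20 shift taken from the abstract.

    ```python
    import numpy as np

    def ls_phase(frames, deltas):
        """Recover the wrapped phase from phase-shifted interferograms with
        known (possibly non-uniform, smaller than pi/4) shifts, by pixelwise
        linear least squares on I_k = a + b*cos(d_k) + c*sin(d_k)."""
        K = len(deltas)
        A = np.column_stack([np.ones(K), np.cos(deltas), np.sin(deltas)])
        I = np.stack([f.ravel() for f in frames])        # K x Npix
        coef, *_ = np.linalg.lstsq(A, I, rcond=None)     # rows: a, b, c
        a, b, c = coef
        return np.arctan2(-c, b).reshape(frames[0].shape)

    # synthetic check with pi/20 shifts, as in the abstract
    x = np.linspace(0, 4 * np.pi, 256)
    phi = np.sin(x)[None, :] * np.ones((256, 1))
    deltas = np.arange(8) * np.pi / 20
    frames = [1 + 0.7 * np.cos(phi + d) for d in deltas]
    print(np.allclose(ls_phase(frames, deltas), phi, atol=1e-6))   # True
    ```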

  4. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    SciTech Connect

    Le, Huy Q.; Molloi, Sabee

    2011-01-15

    Purpose: Energy resolving detectors provide more than one spectral measurement in a single image acquisition. The purpose of this study is to investigate, through simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but adds a calibration. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated by applying the Beer-Lambert law, a Poisson noise process, and the CZT absorption efficiency. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared, and the effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for the other materials. The second method separated the materials but was unable to quantify them. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues.
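
    A hedged sketch of the first technique (every voxel treated as a mixture of all four basis materials), posed per voxel as a small linear least-squares problem over the five energy bins. The bin-wise attenuation matrix below is a made-up placeholder rather than calibrated CZT data, and the non-negativity constraint added via nnls goes beyond the plain least-squares formulation in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: five CZT energy bins; columns: HA, iodine, glandular, adipose.
    # These attenuation values are illustrative placeholders, not calibrated data.
    M = np.array([[5.1, 9.3, 0.60, 0.45],
                  [3.8, 7.1, 0.48, 0.38],
                  [2.9, 5.2, 0.41, 0.33],
                  [2.2, 3.9, 0.36, 0.30],
                  [1.7, 2.8, 0.33, 0.28]])

    def decompose_voxel(mu):
        """Treat the voxel as a mixture of all four basis materials and solve
        for the fractions; nnls enforces non-negativity (an added assumption
        beyond plain least squares)."""
        fractions, _ = nnls(M, mu)
        return fractions

    mu = M @ np.array([0.10, 0.02, 0.55, 0.33])   # synthetic five-bin measurement
    print(decompose_voxel(mu).round(3))           # recovers the mixture fractions
    ```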

  5. Online soft sensor of humidity in PEM fuel cell based on dynamic partial least squares.

    PubMed

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in proton exchange membrane (PEM) fuel cells is important for maintaining proper membrane humidity. The cost and size of existing humidity sensors are prohibitive for online measurements. Online prediction of humidity from readily available measured data would therefore benefit water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error across different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results.
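
    A minimal sketch of the DPLS idea: augment each input sample with its lagged predecessors so that a static PLS regression captures the process dynamics. The synthetic process data, three input channels, and five PLS components are assumptions for illustration; only the time lag of 30 is taken from the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def make_lagged(U, lag):
        """Stack each sample with its `lag` predecessors so a static PLS model
        sees process dynamics (the 'dynamic' in DPLS)."""
        return np.array([U[i - lag:i + 1].ravel() for i in range(lag, len(U))])

    lag = 30                                   # time lag selected in the paper
    rng = np.random.default_rng(1)
    U = rng.standard_normal((2000, 3))         # stand-in for available measurements
    y = np.convolve(U[:, 0], np.ones(10) / 10, mode="same") \
        + 0.1 * rng.standard_normal(2000)      # stand-in "humidity" target

    X, t = make_lagged(U, lag), y[lag:]
    pls = PLSRegression(n_components=5).fit(X[:1500], t[:1500])
    rmse = np.sqrt(np.mean((pls.predict(X[1500:]).ravel() - t[1500:]) ** 2))
    print(f"test RMSE: {rmse:.3f}")
    ```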

  6. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, and finite element methods rely on mixed methods and operator-splitting. In this paper, however, we show that no such difficulty arises for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be solved effectively. Two numerical examples demonstrate the method: driven cavity flows at various Reynolds numbers, and buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  7. Note: Multivariate system spectroscopic model using Lorentz oscillators and partial least squares regression analysis

    NASA Astrophysics Data System (ADS)

    Gad, R. S.; Parab, J. S.; Naik, G. M.

    2010-11-01

    A multivariate spectroscopic system model plays an important role in understanding the chemometrics of the ensemble under study. In this manuscript we discuss various approaches to modeling spectroscopic systems and demonstrate how the Lorentz oscillator can be used to model any general spectroscopic system. Chemometric studies require customized template design for the variants participating in the ensemble, which generates the characteristic matrix of the ensemble under study. A typical biological system resembling human blood tissue, consisting of five major constituents (alanine, urea, lactate, glucose, and ascorbate), has been tested on the model. The model was validated using three approaches, namely, root mean square error (RMSE) analysis within a ±5% confidence interval, the Clarke error grid plot, and an RMSE versus percent noise level study. The model was also tested across various template sizes (from 10 up to 1000 samples) to ascertain the validity of the partial least squares regression. The model has potential for understanding the chemometrics of proteomics pathways.

  8. Noninvasive glucometer model using partial least square regression technique for human blood matrix

    NASA Astrophysics Data System (ADS)

    Parab, J. S.; Gad, R. S.; Naik, G. M.

    2010-05-01

    In this article, we highlight a partial least squares regression (PLSR) model that predicts the glucose level in human blood by considering only five variants. The PLSR model is experimentally validated on 13 template samples. The root mean square errors of the design model and the experimental samples are satisfactory, with values of 3.459 and 5.543, respectively. In PLSR, template design is a critical issue for the number of variants participating in the model. An ensemble of five major variants is simulated to replicate the signatures of these constituents in human blood, i.e., alanine, urea, lactate, glucose, and ascorbate. A multivariate system using PLSR plays an important role in understanding the chemometrics of such an ensemble. The resultant spectra of all these constituents are generated to create templates for the PLSR model. The model has potential scope in the design of a near-infrared spectroscopy based noninvasive glucometer.

  9. Statistical behavior of joint least-square estimation in the phase diversity context.

    PubMed

    Idier, Jérôme; Mugnier, Laurent; Blanc, Amandine

    2005-12-01

    The images recorded by optical telescopes are often degraded by aberrations that induce phase variations in the pupil plane. Several wavefront sensing techniques have been proposed to estimate aberrated phases. One of them is phase diversity, for which the joint least-squares approach introduced by Gonsalves et al. is a reference method for estimating phase coefficients from the recorded images. In this paper, we rely on the asymptotic theory of Toeplitz matrices to show that Gonsalves' technique provides a consistent phase estimator as the size of the images grows. No comparable result is yielded by the classical joint maximum likelihood interpretation (e.g., as found in the work by Paxman et al.). Finally, our theoretical analysis is illustrated through simulated problems.

  10. Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples

    NASA Astrophysics Data System (ADS)

    Harada, Koji; Sakai, Hideaki

    In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, and then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even with sub-Nyquist sampling, assuming the use of compactly-supported sampling kernels that satisfy the recently developed nonaliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method and provides approximately the same performance as the NLS method at a reduced sampling rate, even for closely spaced time delays. For a fixed observation time, this enables a significant reduction in the required number of samples while maintaining the same level of estimation performance.

  11. Optimization of Parametric Constants for Creep-Rupture Data by Means of Least Squares

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Mendelson, A.

    1959-01-01

    An objective method utilizing least squares is presented for determining the optimum parametric constants for stress-rupture data. The method is applied to both isostress and isothermal data for the parameters proposed by Larson and Miller, by Manson and Haferd, and by Dorn. Several examples are treated in detail, and the method is found to give good results. It is shown that the values of the constants for the Manson-Haferd parameter are not critical as long as T_a and log t_a appear in the proper combination. Beyond optimization, the chief utility of the method lies in the fact that it gives the same results for a given set of data no matter who performs the analysis, which is not the case for the graphical methods presently employed.
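
    A hedged sketch of the idea for the Larson-Miller parameter P = T(C + log t): scan the constant C and, for each candidate, fit the master curve by linear least squares, keeping the C with the smallest residual. The scan range, polynomial degree, and synthetic data are illustrative, and the paper's method treats the constants simultaneously rather than by a grid scan.

    ```python
    import numpy as np

    def fit_larson_miller(T, t, stress, deg=2):
        """Scan the Larson-Miller constant C; for each candidate, least-squares
        fit log10(stress) as a polynomial in P = T*(C + log10 t) and keep the C
        with the smallest residual."""
        best_r, best_C, best_coef = np.inf, None, None
        for C in np.linspace(10.0, 30.0, 201):
            P = T * (C + np.log10(t))
            coef, res, *_ = np.polyfit(P, np.log10(stress), deg, full=True)
            r = res[0] if len(res) else 0.0
            if r < best_r:
                best_r, best_C, best_coef = r, C, coef
        return best_C, best_coef

    rng = np.random.default_rng(2)
    T = rng.uniform(800.0, 1100.0, 40)                        # temperature
    t = 10 ** rng.uniform(1.0, 4.0, 40)                       # rupture time
    stress = 10 ** (5.0 - 1e-4 * T * (20.0 + np.log10(t)))    # synthetic, C = 20
    C, _ = fit_larson_miller(T, t, stress)
    print(round(C, 1))                                        # 20.0
    ```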

  12. Multirate time-stepping least squares shadowing method for unsteady turbulent flow

    NASA Astrophysics Data System (ADS)

    Bae, Hyunji Jane; Moin, Parviz

    2014-11-01

    The recently developed least squares shadowing (LSS) method reformulates unsteady turbulent flow simulations as well-conditioned time-domain boundary value problems. The reformulation can enable scalable parallel-in-time simulation of turbulent flows (Wang et al., Phys. Fluids [2013]). An LSS method with multirate time-stepping was implemented to avoid taking small global time-steps (restricted by the largest value of the Courant number on the grid), resulting in a more efficient algorithm. We present the results of the multirate time-stepping LSS compared to a single-rate time-stepping LSS and discuss the computational savings. Hyunji Jane Bae acknowledges support from the Stanford Graduate Fellowship.

  13. Least squares support vector machine for short-term prediction of meteorological time series

    NASA Astrophysics Data System (ADS)

    Mellit, A.; Pavan, A. Massi; Benghanem, M.

    2013-01-01

    The prediction of meteorological time series plays a very important role in several fields. In this paper, an application of the least squares support vector machine (LS-SVM) to short-term prediction of meteorological time series (e.g., solar irradiation, air temperature, relative humidity, wind speed, wind direction, and pressure) is presented. In order to check the generalization capability of the LS-SVM approach, K-fold cross-validation and a Kolmogorov-Smirnov test have been carried out. A comparison between the LS-SVM and different artificial neural network (ANN) architectures (recurrent neural network, multi-layered perceptron, radial basis function, and probabilistic neural network) is presented and discussed. The comparison shows that the LS-SVM produces significantly better results than the ANN architectures and provides promising results for short-term prediction of meteorological data.
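
    For orientation, LS-SVM regression replaces the SVM quadratic program with a single linear solve of its KKT system. The sketch below forecasts a delay-embedded synthetic daily-cycle series one step ahead as a stand-in for the meteorological data; the RBF kernel width, regularization γ, and embedding length are illustrative assumptions.

    ```python
    import numpy as np

    def rbf(X1, X2, sigma):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=2.0):
        """Solve the LS-SVM KKT system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
        n = len(X)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                  # bias b, coefficients a

    def lssvm_predict(Xtr, b, a, Xte, sigma=2.0):
        return rbf(Xte, Xtr, sigma) @ a + b

    # one-step-ahead forecast of a noisy daily-cycle "temperature" series
    rng = np.random.default_rng(3)
    s = np.sin(2 * np.pi * np.arange(424) / 24) + 0.1 * rng.standard_normal(424)
    emb = 24
    X = np.array([s[i:i + emb] for i in range(len(s) - emb)])
    y = s[emb:]
    b, a = lssvm_fit(X[:300], y[:300])
    pred = lssvm_predict(X[:300], b, a, X[300:])
    print(f"test RMSE: {np.sqrt(np.mean((pred - y[300:]) ** 2)):.3f}")
    ```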

  14. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  15. GaussFit - A system for least squares and robust estimation

    NASA Technical Reports Server (NTRS)

    Jefferys, W. H.; Fitzpatrick, M. J.; Mcarthur, B. E.

    1988-01-01

    GaussFit is a new computer program for solving least-squares and robust estimation problems. It has a number of unique features, including a complete programming language designed especially to formulate estimation problems, a built-in compiler and interpreter to support the programming language, and a built-in algebraic manipulator for calculating the required partial derivatives analytically. These features make GaussFit very easy to use, so that even complex problems can be set up and solved with minimal effort. GaussFit can correctly handle many cases of practical interest: nonlinear models, exact constraints, correlated observations, and models where the equations of condition contain more than one observed quantity. An experimental robust estimation capability is built into GaussFit so that data sets contaminated by outliers can be handled simply and efficiently.

  16. Multisource least-squares migration and prism wave reverse time migration

    NASA Astrophysics Data System (ADS)

    Dai, Wei

    Least-squares migration has been shown to produce high quality migration images, but its computational cost is considered too high for practical imaging. In this dissertation, a multisource least-squares migration algorithm (MLSM) is proposed to increase the computational efficiency by utilizing the blended-sources processing technique. The MLSM algorithm is implemented with both the Kirchhoff migration and reverse time migration methods. In the last chapter, a new method is proposed to migrate prism waves separately to illuminate vertical reflectors such as salt flanks. Its advantage over the standard RTM method is that it does not require modifying the migration velocity model. There are three main chapters in this dissertation. In Chapter 2, the MLSM algorithm is implemented with Kirchhoff migration and random time-shift encoding functions. Numerical results with Kirchhoff least-squares migration on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the I/O cost of MLSM is reduced by a factor of 320. Empirical results show that the crosstalk noise introduced by blended sources is reduced more effectively when the encoding functions are changed at every iteration. Analysis of the signal-to-noise ratio (SNR) suggests the number of iterations needed to enhance the SNR to an acceptable level. The benefit is that Kirchhoff MLSM is a few times faster than standard LSM and produces much better resolved images than standard Kirchhoff migration. In Chapter 3, the MLSM algorithm is implemented with the reverse time migration method and a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of prestack plane-wave LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer

  17. The Least Squares Stochastic Finite Element Method in Structural Stability Analysis of Steel Skeletal Structures

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Szafran, J.

    2015-05-01

    The main purpose of this work is to verify the influence of the weighting procedure in the Least Squares Method on the probabilistic moments resulting from the stability analysis of steel skeletal structures. We also discuss this issue in the context of the geometrical nonlinearity appearing in the Stochastic Finite Element Method equations for the stability analysis, and the preservation of the Gaussian probability density function employed to model the Young's modulus of structural steel in this problem. The weighting procedure itself (with both triangular and Dirac-type weights) shows a rather marginal influence on all probabilistic coefficients under consideration. This hybrid stochastic computational technique, consisting of the FEM and computer algebra systems (the ROBOT and MAPLE packages), may be used for analogous nonlinear analyses in structural reliability assessment.

  18. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    SciTech Connect

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C; Sklute, Elizabeth; Dyare, Melinda D

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  19. Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares

    PubMed Central

    Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai

    2013-01-01

    Online monitoring of humidity in proton exchange membrane (PEM) fuel cells is important for maintaining proper membrane humidity. The cost and size of existing humidity sensors are prohibitive for online measurements. Online prediction of humidity from readily available measured data would therefore benefit water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error across different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923

  20. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach is introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental points was used, obtained with a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model shows more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes.

  1. Adaptive control of a flexible beam using least square lattice filters

    NASA Technical Reports Server (NTRS)

    Sundararajan, N.; Montgomery, R. C.

    1983-01-01

    This paper presents an indirect adaptive control scheme for flexible structures using recursive least squares lattice filters. The identification scheme uses lattice filters that provide an on-line estimate of the number of modes, the mode shapes, and the modal amplitudes. These modes are coupled, and a transformation to decouple them in order to obtain the natural modes is presented. The decoupled modal amplitude time series are then used in an equation-error identification scheme to identify the model parameters in autoregressive moving average (ARMA) form. The control is based on a modal pole placement scheme with the objective of vibration suppression, with control gains calculated from the identified ARMA parameters. Before the identified parameters are used for control, detailed testing and validation procedures are carried out on them. The full adaptive control scheme is demonstrated by simulation of the 12-foot free-free beam apparatus at NASA Langley Research Center.

  2. Michaelis-Menten kinetics, the operator-repressor system, and least squares approaches.

    PubMed

    Hadeler, Karl Peter

    2013-01-01

    The Michaelis-Menten (MM) function is a fractional linear function depending on two positive parameters. These can be estimated by nonlinear or linear least squares methods. The nonlinear methods, based directly on the defect of the MM function, can fail to produce any minimizer. The linear methods always produce a unique minimizer which, however, may not be positive. Here we give sufficient conditions on the data such that the nonlinear problem has at least one positive minimizer, and also conditions for the minimizer of the linear problem to be positive. We discuss in detail the models and equilibrium relations of a classical operator-repressor system, and we extend our approach to the MM problem with leakage and to reversible MM kinetics. The arrangement of the sufficient conditions highlights the important role of data that have a concavity property (chemically feasible data).
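
    A sketch contrasting the two estimator families on synthetic data: a linear least-squares fit of the Hanes-Woolf linearization s/v = s/Vmax + Km/Vmax (one common linear formulation, not necessarily the paper's) and a nonlinear least-squares fit of the MM function itself.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mm(s, vmax, km):
        return vmax * s / (km + s)

    s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    v = mm(s, 10.0, 3.0) * (1 + 0.03 * np.random.default_rng(4).standard_normal(7))

    # linear least squares on the Hanes-Woolf form  s/v = (1/Vmax) s + Km/Vmax
    A = np.column_stack([s, np.ones_like(s)])
    slope, intercept = np.linalg.lstsq(A, s / v, rcond=None)[0]
    vmax_lin, km_lin = 1.0 / slope, intercept / slope   # positivity not guaranteed

    # nonlinear least squares on the MM function itself (may fail to converge
    # for infeasible data; here the linear estimate provides the starting point)
    (vmax_nl, km_nl), _ = curve_fit(mm, s, v, p0=[vmax_lin, km_lin])
    print(vmax_lin, km_lin)   # ~10, ~3
    print(vmax_nl, km_nl)
    ```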

  3. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

  4. Generalized total least squares to characterize biogeochemical processes of the ocean

    NASA Astrophysics Data System (ADS)

    Guglielmi, Véronique; Goyet, Catherine; Touratier, Franck; El Jai, Marie

    2017-01-01

    The chemical composition of the global ocean is governed by biological, chemical, and physical processes. These processes interact with each other so that the concentrations of carbon, oxygen, nitrogen (mainly from nitrate, nitrite, and ammonium), and phosphorus (mainly from phosphate) vary in constant proportions, referred to as the Redfield ratios. We construct here the generalized total least squares estimator of these ratios. The significance of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it can be applied identically in any area where enough data are available. The tests applied to Atlantic Ocean data highlight a variability of the Redfield ratios, both with geographical location and with depth. This variability emphasizes the importance of local and accurate estimates of the Redfield ratios.
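
    For orientation, classical (unweighted) total least squares has a closed-form SVD solution; the paper's generalized estimator additionally weights by the error covariances of each variable. A minimal sketch with a synthetic nitrate-phosphate relation near the canonical Redfield N:P ratio of 16:

    ```python
    import numpy as np

    def tls(A, b):
        """Classical total least squares via the SVD of the augmented matrix
        [A b]; errors are allowed in both A and b (plain TLS -- the paper's
        generalized TLS additionally weights by the error covariances)."""
        n = A.shape[1]
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        V = Vt.T
        return (-V[:n, n:] / V[n:, n:]).ravel()

    rng = np.random.default_rng(5)
    p = rng.uniform(0.5, 3.0, 200)                 # true phosphate
    no3 = 16 * p + rng.normal(0.0, 0.3, 200)       # nitrate, noisy
    p_obs = p + rng.normal(0.0, 0.02, 200)         # phosphate, also noisy
    A = np.column_stack([p_obs, np.ones(200)])
    slope, intercept = tls(A, no3)
    print(round(slope, 1))                         # ~16, the N:P Redfield ratio
    ```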

  5. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad

    2017-04-01

    In this work, we present a novel method for two-dimensional shape or wavefront reconstruction from slope measurements. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes over a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as the final result. Numerical simulations verify that the proposed method has smaller algorithmic errors than two other existing methods used for comparison, with particularly better performance at the boundaries. The influence of noise is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
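
    A simplified stand-in for the integration step: reconstruct heights from x/y slope data by linear least squares on finite differences (Hudgin-style), rather than with the spline representation proposed in the paper. The grid size and the pinning of the piston term are illustrative choices.

    ```python
    import numpy as np

    def integrate_slopes(gx, gy, h=1.0):
        """Least-squares height reconstruction from slopes: each equation ties
        a height difference to the averaged measured slope; piston is pinned."""
        ny, nx = gx.shape
        N = ny * nx
        idx = np.arange(N).reshape(ny, nx)
        rows, rhs = [], []
        for j in range(ny):
            for i in range(nx - 1):
                r = np.zeros(N); r[idx[j, i + 1]] = 1; r[idx[j, i]] = -1
                rows.append(r); rhs.append(h * 0.5 * (gx[j, i] + gx[j, i + 1]))
        for j in range(ny - 1):
            for i in range(nx):
                r = np.zeros(N); r[idx[j + 1, i]] = 1; r[idx[j, i]] = -1
                rows.append(r); rhs.append(h * 0.5 * (gy[j, i] + gy[j + 1, i]))
        r = np.zeros(N); r[0] = 1                   # pin the piston term
        rows.append(r); rhs.append(0.0)
        z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return z.reshape(ny, nx)

    # verify on a paraboloid (exact up to piston for these averaged slopes)
    y, x = np.mgrid[0:20, 0:20].astype(float)
    z_true = 0.01 * (x - 10) ** 2 + 0.02 * (y - 10) ** 2
    zr = integrate_slopes(0.02 * (x - 10), 0.04 * (y - 10))
    print(np.allclose(zr - zr[0, 0], z_true - z_true[0, 0], atol=1e-6))   # True
    ```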

  6. Slip distribution of the 2010 Mentawai earthquake from GPS observation using least squares inversion method

    NASA Astrophysics Data System (ADS)

    Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya

    2016-05-01

    Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. The least squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimal slip distribution by applying a weight to the smoothing constraint and a weight of zero to the slip value constraint at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, the dip-slip component tends to be dominant. The seismic moment calculated from the slip distribution was 6.89 × 10^20 Nm, which is equivalent to a magnitude of 7.8.

  7. Quantification of brain lipids by FTIR spectroscopy and partial least squares regression

    NASA Astrophysics Data System (ADS)

    Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph

    2009-01-01

    Brain tissue is characterized by high lipid content, which decreases, along with changes in lipid composition, during the transformation from normal brain tissue to tumors. The analysis of brain lipids might therefore complement existing diagnostic tools in determining tumor type and grade. The objective of this work is to extract lipids from the gray and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts, and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside, and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts predicted by the PLS regression of the IR spectra was compared with lipid quantification by thin layer chromatography.

  8. Phase-unwrapping algorithm by a rounding-least-squares approach

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps via a robust, noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy than the MCNF method with a computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user-set parameters, it could be used in metrological interferometry and fringe-projection automatic real-time applications.
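
    For context, the classical global least-squares unwrapper (Ghiglia-Romero style), against which newer schemes such as the proposed one are usually positioned, solves a discrete Poisson equation driven by wrapped phase differences; it can be sketched compactly with a DCT. This is not the paper's rounding-based algorithm.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def unwrap_ls(psi):
        """Global least-squares unwrapping: solve the discrete Poisson equation
        whose source is built from wrapped phase differences (DCT solver)."""
        ny, nx = psi.shape
        dx = np.zeros_like(psi); dy = np.zeros_like(psi)
        dx[:, :-1] = wrap(np.diff(psi, axis=1))
        dy[:-1, :] = wrap(np.diff(psi, axis=0))
        rho = (dx - np.roll(dx, 1, axis=1)) + (dy - np.roll(dy, 1, axis=0))
        r = dctn(rho, norm="ortho")
        jj, ii = np.mgrid[0:ny, 0:nx]
        denom = 2.0 * (np.cos(np.pi * ii / nx) + np.cos(np.pi * jj / ny) - 2.0)
        denom[0, 0] = 1.0                 # mean (piston) is undetermined
        phi = r / denom
        phi[0, 0] = 0.0
        return idctn(phi, norm="ortho")

    yy, xx = np.mgrid[0:128, 0:128].astype(float)
    phi_true = 2e-3 * (xx - 64) ** 2 + 1e-3 * (yy - 64) ** 2
    rec = unwrap_ls(wrap(phi_true))
    err = (rec - phi_true) - (rec - phi_true).mean()
    print(np.abs(err).max() < 1e-6)       # exact up to piston, residue-free case
    ```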

  9. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  10. River Flow Forecasting: a Hybrid Model of Self Organizing Maps and Least Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Ismail, S.; Samsudin, R.; Shabri, A.

    2010-10-01

    Successful river flow time series forecasting is a major goal and an essential procedure in water resources planning and management. This study introduces a new hybrid model based on a combination of two familiar nonlinear methods of mathematical modeling: the Self Organizing Map (SOM) and the Least Square Support Vector Machine (LSSVM), referred to as the SOM-LSSVM model. The hybrid model uses the SOM algorithm to cluster the training data into several disjoint clusters, and individual LSSVMs to forecast the river flow. The feasibility of the proposed model is evaluated on actual river flow data from the Bernam River, located in Selangor, Malaysia. The results have been compared with those obtained using LSSVM and artificial neural network (ANN) models. The experimental results show that the SOM-LSSVM model outperforms the other models for forecasting river flow, forecasting more precisely and providing a promising alternative technique for river flow forecasting.

  11. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables for the LSSVM model, which performs the time series forecasting. In this study, the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long-term observations of monthly river flow discharge. The standard statistical measures, root mean square error (RMSE) and correlation coefficient (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  12. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and that more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  13. Motion correction of magnetic resonance imaging data by using adaptive moving least squares method.

    PubMed

    Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Park, Hae-Jeong; Yoon, Jungho

    2015-06-01

    Image artifacts caused by subject motion during the imaging sequence are among the most common problems in magnetic resonance imaging (MRI) and often degrade image quality. In this study, we develop a motion correction algorithm for interleaved MR acquisition. An advantage of the proposed method is that it requires neither additional equipment nor redundant over-sampling. The general framework of this study is similar to that of Rohlfing et al. [1], except for the following fundamental modification: a three-dimensional (3-D) scattered data approximation method is used to correct the motion-corrupted data as a post-processing step. In order to obtain a better match to the local structures of the given image, we use a data-adapted moving least squares (MLS) method that improves the performance of the classical method. Numerical results demonstrate the advantages of the proposed algorithm.

  14. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the globally optimal estimate; its convergence rate is maximized using a scaling parameter and a preconditioning method, and it works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the globally optimal estimate in a finite number of steps. Numerical experiments illustrate the performance of the proposed methods.

  15. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    PubMed Central

    Jiang, Kuosheng; Xu, Guanghua; Liang, Lin; Tao, Tangfei; Gu, Fengshou

    2014-01-01

    In this paper, a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but SR produces a nonlinear distortion of the impulsive signal's shape. To eliminate this distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and that the key features of impulsive signals are reconstructed with a good degree of accuracy, leading to an accurate diagnosis of faults in roller bearings in a run-to-failure test. PMID:25076220

  16. Probabilistic partial least squares regression for quantitative analysis of Raman spectra.

    PubMed

    Li, Shuo; Nyagilo, James O; Dave, Digant P; Wang, Wei; Zhang, Baoju; Gao, Jean

    2015-01-01

    With the latest developments in the Surface-Enhanced Raman Scattering (SERS) technique, quantitative analysis of Raman spectra has shown potential and a promising trend of development for in vivo molecular imaging. Partial Least Squares Regression (PLSR) is the state-of-the-art method, but it relies only on training samples, which makes it difficult to incorporate complex domain knowledge. Based on probabilistic Principal Component Analysis (PCA) and the idea of probabilistic curve fitting, we propose a probabilistic PLSR (PPLSR) model and an Expectation Maximization (EM) algorithm for estimating its parameters. This model explains PLSR from a probabilistic viewpoint, describes its essential meaning, and provides a foundation for developing future Bayesian nonparametric models. Two real Raman spectra datasets were used to evaluate this model, and experimental results show its effectiveness.

  17. Regularized least-squares migration of simultaneous-source seismic data with adaptive singular spectrum analysis.

    PubMed

    Li, Chuang; Huang, Jian-Ping; Li, Zhen-Chun; Wang, Rong-Rong

    2017-01-01

    Simultaneous-source acquisition has been recognized as an economic and efficient acquisition method, but direct imaging of simultaneous-source data produces migration artifacts because of the interference of adjacent sources. To overcome this problem, we propose a regularized least-squares reverse time migration method (RLSRTM) using the singular spectrum analysis technique, which imposes a sparseness constraint on the inverted model. Additionally, the difference spectrum theory of singular values is presented so that RLSRTM can be implemented adaptively to eliminate the migration artifacts. With numerical tests on a flat-layer model and the Marmousi model, we validate the superior imaging quality, efficiency, and convergence of RLSRTM compared with LSRTM when dealing with simultaneous-source, incomplete, and noisy data.

  18. The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.

    2006-01-01

    The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting starlight interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.

  19. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results show the effectiveness of the proposed approach and confirm that it has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process where about 400 data points are used.

  20. A least-squares finite element method for 3D incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. Taylor-Görtler-like vortices are observed at Re = 1,000.

  1. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity, with estimates that are uniform in the Lamé constants.

  2. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
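
    A toy sketch of the shared structure identified in the paper: SIRT/SQS-type iterations are preconditioned gradient descent on the weighted least-squares cost with the diagonal majorizer D = diag(AᵀWA·1). The unregularized cost, the relaxation factor of 1.9, and the random nonnegative test system are illustrative assumptions.

    ```python
    import numpy as np

    def sirt_wls(A, b, w, n_iter=3000, relax=1.9):
        """Preconditioned gradient descent for min_x ||Ax - b||_W^2 using the
        SIRT/SQS diagonal majorizer D = diag(A^T W A 1); A assumed nonnegative
        so that D majorizes the Hessian and relax < 2 converges."""
        D = A.T @ (w * (A @ np.ones(A.shape[1])))
        D[D == 0] = 1.0
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * (A.T @ (w * (b - A @ x))) / D
        return x

    rng = np.random.default_rng(6)
    A = rng.uniform(0, 1, (300, 50))
    x_true = rng.uniform(0, 1, 50)
    w = rng.uniform(0.5, 2.0, 300)
    b = A @ x_true + 0.01 * rng.standard_normal(300)
    x_ref = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))   # exact WLS
    print(np.allclose(sirt_wls(A, b, w), x_ref, atol=1e-4))          # True
    ```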

  3. Discontinuous Galerkin solution of the Navier-Stokes equations on deformable domains

    SciTech Connect

    Persson, P.-O.; Bonet, J.; Peraire, J.

    2009-01-13

    We describe a method for computing time-dependent solutions to the compressible Navier-Stokes equations on variable geometries. We introduce a continuous mapping between a fixed reference configuration and the time-varying domain. By writing the Navier-Stokes equations as a conservation law for the independent variables in the reference configuration, the complexity introduced by variable geometry is reduced to solving a transformed conservation law in a fixed reference configuration. The spatial discretization is carried out using the Discontinuous Galerkin method on unstructured meshes of triangles, while the time integration is performed using an explicit Runge-Kutta method. For general domain changes, the standard scheme fails to preserve exactly the free-stream solution, which leads to some accuracy degradation, especially for low order approximations. This situation is remedied by adding an additional equation for the time evolution of the transformation Jacobian to the original conservation law and correcting for the accumulated metric integration errors. A number of results are shown to illustrate the flexibility of the approach to handle high order approximations on complex geometries.

  4. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any of the new method depends

  5. The use of least squares methods in functional optimization of energy use prediction models

    NASA Astrophysics Data System (ADS)

    Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.

    2012-06-01

    The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof, and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE™ software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis, with the objective of minimizing the difference between the correlation-predicted results and the VisualDOE simulation results. The resulting correlation reduces this difference to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. The method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.

  6. Discrete variable representation in electronic structure theory: Quadrature grids for least-squares tensor hypercontraction

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2013-05-01

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.

  7. Geometry of nonlinear least squares with applications to sloppy models and optimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark K.; Machta, Benjamin B.; Sethna, James P.

    2011-03-01

    Parameter estimation by nonlinear least-squares minimization is a common problem that has an elegant geometric interpretation: the possible parameter values of a model induce a manifold within the space of data predictions. The minimization problem is then to find the point on the manifold closest to the experimental data. We show that the model manifolds of a large class of models, known as sloppy models, have many universal features; they are characterized by a geometric series of widths, extrinsic curvatures, and parameter-effect curvatures, which we describe as a hyper-ribbon. A number of common difficulties in optimizing least-squares problems are due to this common geometric structure. First, algorithms tend to run into the boundaries of the model manifold, causing parameters to diverge or become unphysical before they have been optimized. We introduce the model graph as an extension of the model manifold to remedy this problem. We argue that appropriate priors can remove the boundaries and further improve the convergence rates. We show that typical fits will have many evaporated parameters unless the data are very accurately known. Second, “bare” model parameters are usually ill-suited to describing model behavior; cost contours in parameter space tend to form hierarchies of plateaus and long narrow canyons. Geometrically, we understand this inconvenient parametrization as an extremely skewed coordinate basis and show that it induces a large parameter-effect curvature on the manifold. By constructing alternative coordinates based on geodesic motion, we show that these long narrow canyons are transformed in many cases into a single quadratic, isotropic basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting algorithms as an Euler approximation to geodesic motion in these natural coordinates on the model manifold and the model graph, respectively. By adding a geodesic acceleration adjustment to these algorithms, we alleviate the

  8. Iterative weighting of multiblock data in the orthogonal partial least squares framework.

    PubMed

    Boccard, Julien; Rutledge, Douglas N

    2014-02-27

    The integration of multiple data sources has emerged as a pivotal aspect of assessing complex systems comprehensively. This new paradigm requires the ability to separate common and redundant from specific and complementary information during the joint analysis of several data blocks. However, inherent problems encountered when analysing single tables are amplified with the generation of multiblock datasets. Finding the relationships between data layers of increasing complexity therefore constitutes a challenging task. In the present work, an algorithm is proposed for the supervised analysis of multiblock data structures. It associates the interpretability advantages of the orthogonal partial least squares (OPLS) framework with the ability of common component and specific weights analysis (CCSWA) to weight each data table individually, in order to grasp its specificities and handle efficiently the different sources of Y-orthogonal variation. Three applications are proposed for illustration purposes. A first example refers to a quantitative structure-activity relationship study aiming to predict the binding affinity of flavonoids toward the P-glycoprotein based on physicochemical properties. A second application concerns the integration of several groups of sensory attributes for overall quality assessment of a series of red wines. A third case study highlights the ability of the method to combine very large heterogeneous data blocks from Omics experiments in systems biology. Results were compared to the reference multiblock partial least squares (MBPLS) method to assess the performance of the proposed algorithm in terms of predictive ability and model interpretability. In all cases, ComDim-OPLS was demonstrated to be a relevant data mining strategy for the simultaneous analysis of multiblock structures, accounting for specific variation sources in each dataset and providing a balance between predictive and descriptive aims.

  9. Mass and Momentum Conservation of the Least-Squares Spectral Collocation Method for the Time-Dependent Stokes Equations

    NASA Astrophysics Data System (ADS)

    Kattelans, Thorsten; Heinrichs, Wilhelm

    2009-09-01

    For Stokes problems, least-squares schemes have the big advantage that they require no stabilization and equal-order interpolation can be used. The disadvantage of the Least-Squares Finite Element Method (LSFEM) and of the Least-Squares Spectral Element Method (LSSEM) is that they perform poorly with respect to conservation of mass for internal flow problems, although the LSSEM compensates for this with superior conservation of momentum. In the literature it has been shown that the Least-Squares Spectral Collocation Method (LSSCM) leads to superior conservation of mass and momentum for the steady Stokes equations. Here, we extend the study to the time-dependent Stokes equations for an internal flow problem, where the domain is decomposed into different elements using the transfinite mapping of Gordon and Hall. To minimize the influence of round-off errors, we use QR decomposition for solving the resulting overdetermined algebraic systems instead of forming normal equations.
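
    The numerical motivation for the last step can be shown in a few lines. This is a generic illustration on a made-up ill-conditioned system, not the authors' collocation code: forming A^T A squares the condition number, while QR works on A directly:

        import numpy as np
        from scipy.linalg import qr, solve_triangular

        # Solve the overdetermined system A x = b in the least-squares sense.
        rng = np.random.default_rng(1)
        A = np.vander(np.linspace(0, 1, 50), 12)   # ill-conditioned stand-in matrix
        x_true = rng.standard_normal(12)
        b = A @ x_true

        Q, R = qr(A, mode="economic")              # QR route: no squaring of cond(A)
        x_qr = solve_triangular(R, Q.T @ b)

        x_ne = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations, for comparison

        print("cond(A)       =", np.linalg.cond(A))
        print("QR error      =", np.linalg.norm(x_qr - x_true))
        print("normal-eq err =", np.linalg.norm(x_ne - x_true))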

  10. Application of Partial Least Square (PLS) Regression to Determine Landscape-Scale Aquatic Resources Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology, particularly for determining the associations among multiple constituents of surface water and landscape configuration. Common dat...

  11. Application of Partial Least Squares (PLS) Regression to Determine Landscape-Scale Aquatic Resource Vulnerability in the Ozark Mountains

    EPA Science Inventory

    Partial least squares (PLS) analysis offers a number of advantages over the more traditionally used regression analyses applied in landscape ecology to study the associations among constituents of surface water and landscapes. Common data problems in ecological studies include: s...

  12. Correspondence and Least Squares Analyses of Soil and Rock Compositions for the Viking Lander 1 and Pathfinder Sites

    NASA Technical Reports Server (NTRS)

    Larsen, K. W.; Arvidson, R. E.; Jolliff, B. L.; Clark, B. C.

    2000-01-01

    Correspondence and Least Squares Mixing Analysis techniques are applied to the chemical composition of Viking 1 soils and Pathfinder rocks and soils. Implications for the parent composition of local and global materials are discussed.

  13. Extending the trend vector: The trend matrix and sample-based partial least squares

    NASA Astrophysics Data System (ADS)

    Sheridan, Robert P.; Nachbar, Robert B.; Bush, Bruce L.

    1994-06-01

    Trend vector analysis [Carhart, R.E. et al., J. Chem. Inf. Comput. Sci., 25 (1985) 64], in combination with topological descriptors such as atom pairs, has proved useful in drug discovery for ranking large collections of chemical compounds in order of predicted biological activity. The compounds with the highest predicted activities, upon being tested, often show a several-fold increase in the fraction of active compounds relative to a randomly selected set. A trend vector is simply the one-dimensional array of correlations between the biological activity of interest and a set of properties or `descriptors' of compounds in a training set. This paper examines two methods for generalizing the trend vector to improve the predicted rank order. The trend matrix method finds the correlations between the residuals and the simultaneous occurrence of descriptors, which are stored in a two-dimensional analog of the trend vector. The SAMPLS method derives a linear model by partial least squares (PLS), using the `sample-based' formulation of PLS [Bush, B.L. and Nachbar, R.B., J. Comput.-Aided Mol. Design, 7 (1993) 587] for efficiency in treating the large number of descriptors. PLS accumulates a predictive model as a sum of linear components. Expressed as a vector of prediction coefficients on properties, the first PLS component is proportional to the trend vector. Subsequent components adjust the model toward full least squares. For both methods the residuals decrease, while the risk of overfitting the training set increases. We therefore also describe statistical checks to prevent overfitting. These methods are applied to two data sets, a small homologous series of disubstituted piperidines, tested on the dopamine receptor, and a large set of diverse chemical structures, some of which are active at the muscarinic receptor. Each data set is split into a training set and a test set, and the activities in the test set are predicted from a fit on the training set. Both the trend

  14. Unlocking interpretation in near infrared multivariate calibrations by orthogonal partial least squares.

    PubMed

    Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan

    2009-01-01

    Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in

  15. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    PubMed

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model which can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near infrared (NIR) absorption spectroscopy. Analysis of the near infrared absorption peaks of all samples showed that lavender essential oil carries abundant chemical information, and that the interference of random noise is relatively low, in the spectral interval of 7100~4500 cm(-1); the PLS models were therefore constructed on this interval for further analysis. 8 abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration set samples and 52 validation set samples. Gas chromatography mass spectrometry (GC-MS) was used as the reference tool to determine the content of linalool and linalyl acetate in lavender essential oil. The data matrix was then established from the GC-MS reference values of the two compounds in combination with the original NIR data. In order to optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and their filtering effects were compared; after analyzing the quantitative model results for linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was the optimal pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points which are unrelated to the determined compositions or present nonlinear correlation; finally, 8 spectral intervals comprising 160 wavelength points were retained as the dataset. Combining the data sets optimized by OSC-FiPLS with partial least squares (PLS), a rapid quantitative analysis model was established for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil, numbers of hidden variables of two

  16. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  17. Fundamental solution of Laplace's equation in oblate spheroidal coordinates and Galerkin's matrix for Neumann's problem in Earth's gravity field studies

    NASA Astrophysics Data System (ADS)

    Holota, Petr; Nesvadba, Otakar

    2015-04-01

    In this paper the reciprocal distance is used for generating Galerkin's approximations in the weak solution of Neumann's problem that has an important role in Earth's gravity field studies. The reciprocal distance has a natural tie to the fundamental solution of Laplace's partial differential equation and in the paper it is represented by means of an expansion into a series of oblate spheroidal harmonics. Subsequently, the gradient vector of the reciprocal distance is constructed. In the computation of its components the expansion mentioned above is employed. The paper then focuses on the scalar product of reciprocal distance gradients in two different points and in particular on a series representation of a volume integral of the scalar product spread over an unbounded domain given by the exterior of an oblate spheroid (oblate ellipsoid of revolution). The integral yields the entries of Galerkin's matrix. The numerical interpretation of all the expansions used as well as the respective software implementation within the OpenCL framework is treated, which concerns also a numerical evaluation of Legendre functions of a real and an imaginary argument. In parallel an approximate closed formula expressing the entries of Galerkin's matrix (with an accuracy up to terms multiplied by the square of numerical eccentricity) is derived for convenience and comparison. The paper includes extensive numerical examples that illustrate the approach applied and demonstrate the accuracy of the derived formulas. Aspects related to practical applications are discussed.

  18. Parameterized least-squares attitude history estimation and magnetic field observations of the auroral spatial structures probe

    NASA Astrophysics Data System (ADS)

    Martineau, Ryan J.

    Terrestrial auroras are visible-light events caused by charged particles trapped by the Earth's magnetic field precipitating into the atmosphere along magnetic field lines near the poles. Auroral events are very dynamic, changing rapidly in time and across large spatial scales. Better knowledge of the flow of energy during an aurora will improve understanding of the heating processes in the atmosphere during geomagnetic and solar storms. The Auroral Spatial Structures Probe is a sounding rocket campaign to observe the middle-atmosphere plasma and electromagnetic environment during an auroral event with multipoint simultaneous measurements for fine temporal and spatial resolution. The auroral event in question occurred on January 28, 2015, with liftoff of the rocket at 10:41:01 UTC. The goal of this thesis is to produce clear observations of the magnetic field that may be used to model the current systems of the auroral event. To achieve this, the attitude of ASSP's 7 independent payloads must be estimated, and a new attitude determination method is attempted. The new solution uses nonlinear least-squares parameter estimation with a rigid-body dynamics simulation to determine attitude with an estimated accuracy of a few degrees. Observed magnetic field perturbations found using the new attitude solution are presented, where structures of the perturbations are consistent with previous observations and electromagnetic theory.
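
    As a minimal illustration of the estimation machinery (not the ASSP rigid-body simulation itself), nonlinear least-squares parameter estimation can be set up with scipy by wrapping a forward model in a residual function; here the model is a made-up damped oscillation standing in for simulated sensor data:

        import numpy as np
        from scipy.optimize import least_squares

        # Toy stand-in for attitude-history estimation: fit the parameters of a
        # damped oscillation to noisy "sensor" samples by nonlinear least squares.
        # The real method would wrap a rigid-body dynamics simulation in residual().
        def model(p, t):
            amp, freq, decay, phase = p
            return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

        def residual(p, t, y):
            return model(p, t) - y

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 500)
        y = model([1.0, 0.7, 0.15, 0.4], t) + 0.05 * rng.standard_normal(t.size)

        fit = least_squares(residual, x0=[0.8, 0.6, 0.1, 0.0], args=(t, y))
        print("estimated parameters:", fit.x)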

  19. Three-Dimensional Simulations of Marangoni-Benard Convection in Small Containers by the Least-Squares Finite Element Method

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao; Jiang, Bo-Nan; Wu, Jie; Duh, J. C.

    1996-01-01

    This paper reports a numerical study of the Marangoni-Benard (MB) convection in a planar fluid layer. The least-squares finite element method (LSFEM) is employed to solve the three-dimensional Stokes equations and the energy equation. First, the governing equations are reduced to first order by introducing variables such as vorticity and heat fluxes. The resultant first-order system is then cast into a div-curl-grad formulation, and its ellipticity and permissible boundary conditions are readily proved. This numerical approach provides an equal-order discretization for velocity, pressure, vorticity, temperature, and heat conduction fluxes, and therefore can provide high-fidelity solutions for the complex flow physics of the MB convection. Numerical results reported include the critical Marangoni numbers (M(sub ac)) for the onset of the convection in containers with various aspect ratios, and the planforms of supercritical MB flows. The numerical solutions compared favorably with the experimental results reported by Koschmieder et al.

  20. Extraction of electron energy distribution functions from Langmuir probes using integrated step function response and regularized least squares solver

    NASA Astrophysics Data System (ADS)

    Elsaghir, Ahmed; Shannon, Steve

    2008-10-01

    Electron energy distribution function (EEDF) extraction from Langmuir probe data is an ill-posed problem due to the integral relationship between electron current and EEDF with respect to probe voltage. Curve fitting solutions to extract this EEDF assume a specific type of distribution. Point by point extraction of the second derivative relationship uses a small fraction of the integrated data to extract the EEDF. Recently EEDF extraction techniques have been evaluated using regularized solutions to the integral problem [Gutiérrez-Tapia and Flores-Llamas, Phys. Plasmas 11, 5102 (2004)]. These techniques do not assume any mathematical representation of the EEDF and solve the integral problem for any function that best represents the EEDF. In this paper the electron current for arbitrary functions is derived assuming that the electron density is a sum of step functions representing such a function. This technique for EEDF extraction is validated by adding noise to numerically generated data and using a regularized least squares method to calculate the original function by solving for the individual step function contribution to the total electron current. The methodology, reconstruction, and comparison to current best-known methods will be presented.
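
    A generic Tikhonov-regularized least-squares solve of a discretized integral equation, of the kind referred to above, can be written as one stacked lstsq call; the smoothing kernel, regularization operator and lambda below are all invented for illustration:

        import numpy as np

        # Generic regularized solve of a discretized integral equation K f = g
        # (here a smoothing kernel), illustrating the kind of solver used for
        # EEDF extraction; the kernel and lambda are placeholders.
        n = 100
        x = np.linspace(0, 1, n)
        K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005) * (x[1] - x[0])
        f_true = np.exp(-((x - 0.4) ** 2) / 0.01)
        rng = np.random.default_rng(3)
        g = K @ f_true + 1e-4 * rng.standard_normal(n)

        # Second-difference operator penalizes rough solutions.
        L = np.diff(np.eye(n), 2, axis=0)
        lam = 1e-4
        # Solve min ||K f - g||^2 + lam^2 ||L f||^2 via one stacked lstsq call.
        A = np.vstack([K, lam * L])
        b = np.concatenate([g, np.zeros(L.shape[0])])
        f_reg, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("recovered peak location:", x[np.argmax(f_reg)])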

  1. Estimation of liver T₂ in transfusion-related iron overload in patients with weighted least squares T₂ IDEAL.

    PubMed

    Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H

    2012-01-01

    MR imaging of hepatic iron overload can be achieved by estimating T(2) values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T(2) Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T(2) in the setting of iron overload. The weighted least squares T(2) IDEAL technique improves T(2) estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T(2) assessment with nonlinear exponential fitting, and (ii) a 3D T(2) IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T(2) IDEAL estimation. In cases of severe iron overload, T(2) IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T(2) compared with weighted least squares.
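
    The essential idea of the weighted fit, downweighting the late, noise-dominated echoes, can be sketched with an ordinary log-linear weighted least-squares fit of a monoexponential decay. The echo times, noise level and SNR-proportional weighting below are invented, and this is not the IDEAL water-fat reconstruction itself:

        import numpy as np

        # Generic weighted least-squares fit of S(t) = S0 * exp(-t / T2) using the
        # log-linear form; weighting by the signal magnitude downweights the late,
        # noise-dominated echoes (echo times and noise level are made up).
        rng = np.random.default_rng(4)
        te = np.array([1.0, 2.2, 3.4, 4.6, 5.8, 7.0, 8.2, 9.4])   # echo times, ms
        T2_true, S0 = 3.0, 1000.0
        S = S0 * np.exp(-te / T2_true) + 5.0 * rng.standard_normal(te.size)
        S = np.clip(S, 1e-3, None)                 # keep the log well defined

        # ln S = ln S0 - t / T2; weight each equation by the approximate SNR ~ S.
        A = np.column_stack([np.ones_like(te), -te])
        w = S / S.max()
        Aw, bw = A * w[:, None], np.log(S) * w
        coef, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
        print("T2 estimate (ms):", 1.0 / coef[1])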

  2. Objective chemical fingerprinting of oil spills by partial least-squares discriminant analysis.

    PubMed

    Gómez-Carracedo, M P; Ferré, J; Andrade, J M; Fernández-Varela, R; Boqué, R

    2012-06-01

    An objective method based on partial least-squares discriminant analysis (PLS-DA) was used to assign an oil lump collected on the coastline to a suspected source. The approach is an add-on to current US and European oil fingerprinting standard procedures that are based on lengthy and rather subjective visual comparison of chromatograms. The procedure required an initial variable selection step using the selectivity ratio index (SRI) followed by a PLS-DA model. From the model, a "matching decision diagram" was established that yielded the four possible decisions that may arise from standard procedures (i.e., match, non-match, probable match, and inconclusive). The decision diagram included two limits, one derived from the Q-residuals of the samples of the target class and the other derived from the predicted y of the PLS model. The method was used to classify 45 oil lumps collected on the Galician coast after the Prestige wreckage. The results compared satisfactorily with those from the standard methods.

  3. Least-squares reverse-time migration with cost-effective computation and memory storage

    NASA Astrophysics Data System (ADS)

    Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

    2016-06-01

    Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repetitively computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid are stored. Then, the RTM procedure of different iterations only involves: (1) backward propagation of the residual between Born modeled and acquired data, and (2) implementation of the modified excitation-amplitude imaging condition by multiplying the maximum amplitude by the back propagated data residuals only at the grids that satisfy the imaging time at each time step. For a complex model, 2 or 3 local peak amplitudes and corresponding traveltimes should be confirmed and stored for all the grids so that multiarrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method greatly reduces computation and memory costs.

  4. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    PubMed

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was performed for the BPSK signal and the BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals. The approximation results of the CCRWs are presented. Furthermore, the performances of the approximation results are analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is performance degradation in the tracking jitter compared to the CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
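
    The truncated singular value decomposition at the heart of such a design can be demonstrated generically: small singular values are discarded so the least-squares solution stops amplifying noise. The matrix below is an arbitrary ill-conditioned stand-in, not an actual CCRW design matrix:

        import numpy as np

        # Truncated-SVD least squares: discard the small singular values so the
        # solution is not blown up by noise along ill-conditioned directions.
        def tsvd_solve(A, b, k):
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

        rng = np.random.default_rng(5)
        A = np.vander(np.linspace(0, 1, 60), 15)   # ill-conditioned stand-in matrix
        x_true = rng.standard_normal(15)
        b = A @ x_true + 1e-4 * rng.standard_normal(60)

        x_full = tsvd_solve(A, b, 15)              # keep every singular value
        x_trunc = tsvd_solve(A, b, 8)              # drop the smallest ones
        print("solution norm, full rank:", np.linalg.norm(x_full))
        print("solution norm, truncated:", np.linalg.norm(x_trunc))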

  5. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

    PubMed

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-08-01

    Headspace-Mass Spectrometry (HS-MS), Fourier Transform Mid-Infrared spectroscopy (FT-MIR) and UV-Visible spectrophotometry (UV-vis) instrumental responses have been combined to predict virgin olive oil sensory descriptors. 343 olive oil samples analyzed during four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data with both low- and mid-level of abstraction improved PLS predictions for all the olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, especially considering that the legal categorization of virgin olive oils only requires the determination of fruity and defective descriptors.

  6. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting

    PubMed Central

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-01-01

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was performed for the BPSK signal and the BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals. The approximation results of the CCRWs are presented. Furthermore, the performances of the approximation results are analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is performance degradation in the tracking jitter compared to the CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275

  7. Least squares twin support vector machine with Universum data for classification

    NASA Astrophysics Data System (ADS)

    Xu, Yitian; Chen, Mei; Li, Guohui

    2016-11-01

    Universum, a third class not belonging to either class of the classification problem, makes it possible to incorporate prior knowledge into the learning process. A great deal of previous work has demonstrated that the Universum is helpful to supervised and semi-supervised classification. Moreover, Universum has already been introduced into the support vector machine (SVM) and twin support vector machine (TSVM) to enhance the generalisation performance. To further increase the generalisation performance, we propose a least squares TSVM with Universum data (ULS-TSVM) in this paper. Our ULS-TSVM possesses the following advantages: first, it exploits Universum data to improve generalisation performance. Besides, it implements the structural risk minimisation principle by adding a regularisation term to the objective function. Finally, it costs less computing time by solving two small-sized systems of linear equations instead of a single larger-sized quadratic programming problem. To verify the validity of our proposed algorithm, we conduct various experiments around the size of labelled samples and the number of Universum data on data-sets including seven benchmark data-sets, Toy data, MNIST and Face images. Empirical experiments indicate that Universum contributes to improved and more stable prediction accuracy. Especially when fewer labelled samples are given, ULS-TSVM is far superior to the improved LS-TSVM (ILS-TSVM), and slightly superior to the U-TSVM.

  8. Generalized generating function with tucker decomposition and alternating least squares for underdetermined blind identification

    NASA Astrophysics Data System (ADS)

    Gu, Fanglin; Zhang, Hang; Wang, Wenwu; Zhu, Desheng

    2013-12-01

    Generating function (GF) has been used in blind identification for real-valued signals. In this paper, the definition of GF is first generalized for complex-valued random variables in order to exploit the statistical information carried on complex signals in a more effective way. Then an algebraic structure is proposed to identify the mixing matrix from underdetermined mixtures using the generalized generating function (GGF). Two methods, namely GGF-ALS and GGF-TALS, are developed for this purpose. In the GGF-ALS method, the mixing matrix is estimated by the decomposition of the tensor constructed from the Hessian matrices of the GGF of the observations, using an alternating least squares (ALS) algorithm. The GGF-TALS method is an improved version of the GGF-ALS algorithm based on Tucker decomposition. More specifically, the original tensor, as formed in GGF-ALS, is first converted to a lower-rank core tensor using the Tucker decomposition, where the factors are obtained by the left singular-value decomposition of the original tensor's mode-3 matrix. Then the mixing matrix is estimated by decomposing the core tensor with the ALS algorithm. Simulation results show that (a) the proposed GGF-ALS and GGF-TALS approaches have almost the same performance in terms of the relative errors, whereas the GGF-TALS has much lower computational complexity, and (b) the proposed GGF algorithms have superior performance to the latest GF-based baseline approaches.

  9. Partial Least Square Discriminant Analysis Discovered a Dietary Pattern Inversely Associated with Nasopharyngeal Carcinoma Risk

    PubMed Central

    Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen

    2016-01-01

    Evidence on the association between dietary component, dietary pattern and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case–control study, data from a total of 319 incident cases and 319 matched controls were analyzed. Dietary pattern was derived employing partial least square discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score of the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs ordered by loading values. We observed that one unit increase in the scores was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60–0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC protective diet is indicated with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimen for NPC prevention. PMID:27249558

  10. Eddy current characterization of small cracks using least square support vector machine

    NASA Astrophysics Data System (ADS)

    Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

    2016-04-01

    Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although this is a conventional technique for defect detection and localization, its main weakness is that defect characterization, i.e. the exact determination of shape and dimensions, is still a question to be answered. In this work, we demonstrate the capability of small crack sizing using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size of rectangular and thin defects (length and depth) in a conductive plate. The approach is achieved by a novel combination of a finite element method (FEM) with a statistical learning method called least squares support vector machines (LS-SVM). First, we use the FEM to design the forward problem. Next, an algorithm is used to find an adaptive database. Finally, the LS-SVM is used to solve the inverse problems, creating polynomial functions able to approximate the correlation between the crack dimension and the signal picked up from the EC sensor. Several methods are used to find the parameters of the LS-SVM. In this study, the particle swarm optimization (PSO) and genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, and the accuracy was experimentally verified. These results demonstrate the applicability of the presented approach.

  11. Least-squares reverse time migration with and without source wavelet estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Qingchen; Zhou, Hui; Chen, Hanming; Wang, Jie

    2016-11-01

    Least-squares reverse time migration (LSRTM) attempts to find the best-fit reflectivity model by minimizing the mismatch between the observed and simulated seismic data, where source wavelet estimation is one of the crucial issues. We divide the frequency-domain observed seismic data by the numerical Green's function at the receiver nodes to estimate the source wavelet for the conventional LSRTM method, and propose a source-independent LSRTM based on a convolution-based objective function. The numerical Green's function can be simulated with a Dirac wavelet and the migration velocity in the frequency or time domain. Compared to the conventional method with its additional source estimation procedure, the source-independent LSRTM is insensitive to the source wavelet and retains its amplitude-preserving ability even when an incorrect wavelet is used without source estimation. In order to improve the anti-noise ability, we apply a robust hybrid norm objective function to both methods and use synthetic seismic data contaminated by random Gaussian and spike noises with a signal-to-noise ratio of 5 dB to verify their feasibility. The final migration images show that the source-independent algorithm is more robust and has a higher amplitude-preserving ability than the conventional source-estimated method.

  12. Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method

    NASA Astrophysics Data System (ADS)

    Habel, Branislav; Janak, Juraj

    2014-05-01

    Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous records of gravity. This requires an accurate tidal model and possibly atmospheric pressure records at the observed site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours long without the need for any other information. Appropriate parameters for the covariance function are found via Bayes' theorem using a modified optimization process. The accuracy of the method is improved by the rejection of outliers before interpolation. To fill longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.
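
    A minimal least-squares collocation gap filler can be written directly from the prediction formula s_hat = C_ps (C_ss + C_n)^(-1) l. The Gaussian covariance model, correlation length and noise level below are illustrative placeholders rather than tuned gravimetry values:

        import numpy as np

        # Minimal least-squares collocation (LSC) gap filler: predict the signal
        # at missing epochs from observed epochs; covariance model is invented.
        def cov(t1, t2, variance=1.0, corr_len=30.0):
            return variance * np.exp(-((t1[:, None] - t2[None, :]) / corr_len) ** 2)

        rng = np.random.default_rng(6)
        t = np.arange(0.0, 600.0, 1.0)             # 1-min sampling
        signal = np.sin(2 * np.pi * t / 180.0)
        obs = signal + 0.05 * rng.standard_normal(t.size)

        gap = (t >= 250) & (t < 310)               # one-hour gap to be filled
        t_obs, l = t[~gap], obs[~gap]

        C_ss = cov(t_obs, t_obs) + 0.05 ** 2 * np.eye(t_obs.size)  # + noise cov
        C_ps = cov(t[gap], t_obs)
        s_hat = C_ps @ np.linalg.solve(C_ss, l)
        print("rms fill error:", np.sqrt(np.mean((s_hat - signal[gap]) ** 2)))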

  13. An iterative Kalman smoother/least-squares algorithm for the identification of delta-ARX models

    NASA Astrophysics Data System (ADS)

    Chadwick, M. A.; Anderson, S. R.; Kadirkamanathan, V.

    2010-07-01

    Additive measurement noise on the output signal is a significant problem in the δ-domain and disrupts parameter estimation of auto-regressive exogenous (ARX) models. This article deals with the identification of δ-domain linear time-invariant models of ARX structure (i.e. driven by known input signals and additive process noise) by using an iterative identification scheme, where the output is also corrupted by additive measurement noise. The identification proceeds by mapping the ARX model into a canonical state-space framework, where the states are the measurement noise-free values of the underlying variables. A consequence of this mapping is that the original parameter estimation task becomes one of both a state and parameter estimation problem. The algorithm steps between state estimation using a Kalman smoother and parameter estimation using least squares. This approach is advantageous as it avoids directly differencing the noise-corrupted 'raw' signals for use in the estimation phase and uses different techniques to the common parametric low-pass filters in the literature. Results of the algorithm applied to a simulation test problem as well as a real-world problem are given, and show that the algorithm converges quite rapidly and with accurate results.

  14. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to the overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for the individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced for the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. Then, the junctions determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of the boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.

  15. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on the recursive least squares (RLS) algorithm, focusing on a deterministic 'input-output' model. RLS has Kalman filtering features and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of variables. The identification of suspension parameters is achieved by machine learning of the relationship between excitation and response in a vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements' considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for a future vehicle condition monitoring system.
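
    The RLS recursion referred to above is compact enough to state in full; the sketch below identifies the two coefficients of a made-up linear input-output model, with the forgetting factor and noise level chosen arbitrarily:

        import numpy as np

        # Textbook recursive least squares (RLS) with a forgetting factor,
        # identifying the coefficients of a toy model y = a*u1 + b*u2 + noise.
        rng = np.random.default_rng(7)
        theta_true = np.array([2.0, -0.5])
        theta = np.zeros(2)
        P = 1e3 * np.eye(2)                        # large initial covariance
        lam = 0.99                                 # forgetting factor

        for _ in range(500):
            phi = rng.standard_normal(2)           # regressor (excitation) vector
            y = phi @ theta_true + 0.01 * rng.standard_normal()
            k = P @ phi / (lam + phi @ P @ phi)    # gain vector
            theta = theta + k * (y - phi @ theta)  # update estimate via innovation
            P = (P - np.outer(k, phi) @ P) / lam   # update inverse-correlation matrix
        print("identified parameters:", theta)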

  16. Combined Helmholtz equation-least squares method for reconstructing acoustic radiation from arbitrarily shaped objects.

    PubMed

    Wu, Sean F; Zhao, Xiang

    2002-07-01

    A combined Helmholtz equation-least squares (CHELS) method is developed for reconstructing acoustic radiation from an arbitrary object. This method combines the advantages of both the HELS method and the Helmholtz integral theory based near-field acoustic holography (NAH). As such it allows for reconstruction of the acoustic field radiated from an arbitrary object with relatively few measurements, thus significantly enhancing the reconstruction efficiency. The first step in the CHELS method is to establish the HELS formulations based on a finite number of acoustic pressure measurements taken on or beyond a hypothetical spherical surface that encloses the object under consideration. Next enough field acoustic pressures are generated using the HELS formulations and taken as the input to the Helmholtz integral formulations implemented through the boundary element method (BEM). The acoustic pressure and normal component of the velocity at the discretized nodes on the surface are then determined by solving two matrix equations using singular value decomposition (SVD) and regularization techniques. Also presented are in-depth analyses of the advantages and limitations of the CHELS method. Examples of reconstructing acoustic radiation from separable and nonseparable surfaces are demonstrated.

  17. On the choice of expansion functions in the Helmholtz equation least-squares method.

    PubMed

    Semenova, Tatiana; Wu, Sean F

    2005-02-01

    This paper examines the performance of Helmholtz equation least-squares (HELS) method in reconstructing acoustic radiation from an arbitrary source by using three different expansions, namely, localized spherical waves (LSW), distributed spherical waves (DSW), and distributed point sources (DPS), under the same set of measurements. The reconstructed acoustic pressures are validated against the benchmark data measured at the same locations as reconstruction points for frequencies up to 3275 Hz. Reconstruction is obtained by using Tikhonov regularization or its modification with the regularization parameter selected by error-free parameter-choice methods. The impact of the number of measurement points on the resultant reconstruction accuracy under different expansion functions is investigated. Results demonstrate that DSW leads to a better-conditioned transfer matrix, yields more accurate reconstruction than both LSW and DPS, and is not affected as much by the change in measurement points. Also, it is possible to obtain optimal locations of the auxiliary sources for DSW, LSW, and DPS by taking an independent layer of measurements. Use of these auxiliary sources and an optimal combination of regularization and error-free parameter choice methods can yield a satisfactory reconstruction of acoustic quantities on the source surfaces as well as in the field in the most cost-effective manner.

  18. On reconstruction of acoustic pressure fields using the Helmholtz equation least squares method

    PubMed

    Wu

    2000-05-01

    This paper presents analyses and implementation of the reconstruction of acoustic pressure fields radiated from a general, three-dimensional complex vibrating structure using the Helmholtz equation least-squares (HELS) method. The structure under consideration emulates a full-size four-cylinder engine. To simulate sound radiation from a vibrating structure, harmonic excitations are assumed to act on arbitrarily selected surfaces. The resulting vibration responses are solved by the commercial FEM (finite element method) software I-DEAS. Once the normal component of the surface velocity distribution is determined, the surface acoustic pressures are calculated using standard boundary element method (BEM) codes. The radiated acoustic pressures over several planar surfaces at certain distances from the source are calculated by the Helmholtz integral formulation. These field pressures are taken as the input to the HELS formulation to reconstruct acoustic pressures on the entire source surface, as well as in the field. The reconstructed acoustic pressures thus obtained are then compared with benchmark values. Numerical results demonstrate that good agreements can be obtained with relatively few expansion functions. The HELS method is shown to be very effective in the low-to-mid frequency regime, and can potentially become a powerful noise diagnostic tool.

  19. Weighted least-squares solver for determining pressure from particle image velocimetry data

    NASA Astrophysics Data System (ADS)

    de Kat, Roeland

    2016-11-01

    Currently, most approaches to determine pressure from particle image velocimetry data are Poisson approaches (e.g.) or multi-pass marching approaches (e.g.). However, these approaches deal with boundary conditions in their own specific ways, which cannot easily be changed: Poisson approaches enforce boundary conditions strongly, whereas multi-pass marching approaches enforce them weakly. Under certain conditions (depending on the certainty of the data or availability of reference data along the boundary) both types of boundary condition enforcement have to be used together to obtain the best result. In addition, neither of the approaches takes the certainty of the particle image velocimetry data (see e.g.) within the domain into account. Therefore, to address these shortcomings and improve upon current approaches, a new approach is proposed using weighted least squares. The performance of this new approach is tested on synthetic and experimental particle image velocimetry data. Preliminary results show that a significant improvement can be made in determining pressure fields using the new approach. RdK is supported by a Leverhulme Trust Early Career Fellowship.
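
    The core of any weighted least-squares solver is the row scaling by the square root of the weights; everything else (the discretized pressure-gradient operator, the certainty estimates) is problem-specific. A generic sketch, with the system and weights invented:

        import numpy as np

        # Rows of the system A x = b are scaled by sqrt(w), so equations with low
        # certainty (low w) influence the solution less. A and b are stand-ins
        # for a discretized pressure-gradient system.
        def weighted_lstsq(A, b, w):
            sw = np.sqrt(w)
            x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
            return x

        rng = np.random.default_rng(8)
        A = rng.standard_normal((200, 10))
        x_true = rng.standard_normal(10)
        noise = 0.01 * rng.standard_normal(200)
        noise[::7] += 1.0                          # a few badly corrupted equations
        b = A @ x_true + noise
        w = np.where(np.abs(noise) > 0.5, 1e-3, 1.0)   # downweight the bad rows

        x_plain = np.linalg.lstsq(A, b, rcond=None)[0]
        print("unweighted error:", np.linalg.norm(x_plain - x_true))
        print("weighted error  :", np.linalg.norm(weighted_lstsq(A, b, w) - x_true))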

  20. Least-squares harmonic estimation of the tropopause parameters using GPS radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Sharifi, Mohammad Ali; Sam Khaniani, Ali; Masoumi, Salim; Schmidt, Torsten; Wickert, Jens

    2013-04-01

    In order to investigate temporal variations of the tropopause parameters, Least-Squares Harmonic Estimation (LS-HE) is applied to the time series of the tropopause temperatures and heights derived from Global Positioning System Radio Occultation (GPS RO) atmospheric profiles of CHAMP, GRACE and COSMIC missions from January 2006 until May 2010 in different regions of Iran. By applying the univariate LS-HE to the completely unevenly spaced time series of the tropopause temperatures and heights, annual and diurnal components are detected together with their higher harmonics. The multivariate LS-HE estimates the main periodic signals, particularly diurnal and semidiurnal cycles, more clearly than the univariate LS-HE. Mixing in the values of the tropopause height and temperature is seen to occur in winter at lower latitudes (around 30°) as a result of subtropical jet, and in summer at higher latitudes (36°-42°) as an effect of subtropical high. A bimodal pattern is observed in the frequency histograms of the tropopause heights, in which the primary modes for the southern and northern parts of Iran correspond to subtropical and extratropical heights, respectively.
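
    Because LS-HE works directly on unevenly spaced samples, its core can be sketched as a scan over candidate periods, fitting an intercept plus a sine/cosine pair at each and scoring the remaining residual power. The synthetic series below (an annual term plus noise, with made-up sampling) is for illustration only:

        import numpy as np

        # Bare-bones least-squares harmonic estimation on uneven sampling: the
        # candidate period whose sine/cosine pair removes the most residual
        # power marks the detected oscillation.
        rng = np.random.default_rng(9)
        t = np.sort(rng.uniform(0, 1460, 700))     # uneven sampling, ~4 years (days)
        y = 0.8 * np.sin(2 * np.pi * t / 365.25) + 0.1 * rng.standard_normal(t.size)

        periods = np.linspace(100, 500, 400)
        score = np.empty(periods.size)
        for i, T in enumerate(periods):
            A = np.column_stack([np.ones_like(t),
                                 np.sin(2 * np.pi * t / T),
                                 np.cos(2 * np.pi * t / T)])
            r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            score[i] = r @ r                       # residual power after removing T
        print("detected period (days):", periods[np.argmin(score)])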

  1. Realizations and performances of least-squares estimation and Kalman filtering by systolic arrays

    SciTech Connect

    Chen, M.J.

    1987-01-01

    Fast least-squares (LS) estimation and Kalman-filtering algorithms utilizing systolic-array implementation are studied. Based on a generalized systolic QR algorithm, a modified LS method is proposed and shown to have superior computational and inter-cell connection complexities, and is more practical for systolic-array implementation. After whitening processing, the Kalman filter can be formulated as a SRIF data-processing problem followed by a simple LS operation. This approach simplifies the computational structure, and is more reliable when the system has a singular or near-singular coefficient matrix. To improve the throughput rate of the systolic Kalman filter, a topology for stripe QR processing is also proposed. By skewing the order of input matrices, a fully pipelined systolic Kalman-filtering operation can be achieved. With O(n^2) processing units, the system throughput rate becomes O(n). The numerical properties of the systolic LS estimation and Kalman filtering algorithms under finite word-length effects are studied via analysis and computer simulations, and are compared with those of conventional approaches. Fault tolerance of the LS estimation algorithm is also discussed. It is shown that by using a simple bypass register, reasonable estimation performance is still possible for a transient defective processing unit.

  2. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To cope with time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To limit the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.

  3. ADMM-EM Method for L1-Norm Regularized Weighted Least Squares PET Reconstruction

    PubMed Central

    2016-01-01

    The L1-norm regularization is usually used in positron emission tomography (PET) reconstruction to suppress noise artifacts while preserving edges. The alternating direction method of multipliers (ADMM) is proven to be effective for solving this problem. It sequentially updates the additional variables, image pixels, and Lagrangian multipliers. Difficulties lie in obtaining a nonnegative update of the image. And classic ADMM requires updating the image by greedy iteration to minimize the cost function, which is computationally expensive. In this paper, we consider a specific application of ADMM to the L1-norm regularized weighted least squares PET reconstruction problem. The main contribution is the derivation of a new approach that iteratively and monotonically updates the image, remains self-constrained within the nonnegativity region, and requires no predetermined step size. We give a rigorous convergence proof on the quadratic subproblem of the ADMM algorithm considered in the paper. A simplified version is also developed by replacing the minima of the image-related cost function by one iteration that only decreases it. The experimental results show that the proposed algorithm with greedy iterations provides a faster convergence than other commonly used methods. Furthermore, the simplified version gives a comparable reconstructed result with far lower computational costs. PMID:27840655
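
    For orientation, the standard ADMM splitting for the L1-regularized least-squares prototype (the lasso) is sketched below; the PET-specific weighting, system matrix and nonnegativity handling from the paper are not reproduced, and the data are synthetic:

        import numpy as np

        # Standard ADMM for min 0.5*||A x - b||^2 + mu*||x||_1, the same splitting
        # idea the paper adapts to PET reconstruction.
        def admm_lasso(A, b, mu, rho=1.0, iters=200):
            n = A.shape[1]
            AtA, Atb = A.T @ A, A.T @ b
            x = z = u = np.zeros(n)
            M = np.linalg.inv(AtA + rho * np.eye(n))   # factor once, reuse each sweep
            for _ in range(iters):
                x = M @ (Atb + rho * (z - u))          # quadratic x-update
                z = np.maximum(np.abs(x + u) - mu / rho, 0) * np.sign(x + u)  # shrink
                u = u + x - z                          # dual (multiplier) update
            return z

        rng = np.random.default_rng(10)
        A = rng.standard_normal((80, 40))
        x_true = np.zeros(40); x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
        b = A @ x_true + 0.01 * rng.standard_normal(80)
        x_hat = admm_lasso(A, b, mu=0.5)
        print("nonzeros recovered:", np.flatnonzero(np.round(x_hat, 2)))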

  4. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    SciTech Connect

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards; New, Joshua Ryan

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.

  5. Radial Basis Function-Sparse Partial Least Squares for Application to Brain Imaging Data

    PubMed Central

    Yoshida, Hisako; Kawaguchi, Atsushi

    2013-01-01

    Magnetic resonance imaging (MRI) data is an invaluable tool in brain morphology research. Here, we propose a novel statistical method for investigating the relationship between clinical characteristics and brain morphology based on three-dimensional MRI data via radial basis function-sparse partial least squares (RBF-sPLS). Our data consisted of MRI image intensities for multimillion voxels in a 3D array along with 73 clinical variables. This dataset represents a suitable application of RBF-sPLS because of a potential correlation among voxels as well as among clinical characteristics. Additionally, this method can simultaneously select both effective brain regions and clinical characteristics based on sparse modeling. This is in contrast to existing methods, which consider prespecified brain regions because of the computational difficulties involved in processing high-dimensional data. RBF-sPLS employs dimensionality reduction in order to overcome this obstacle. We have applied RBF-sPLS to a real dataset composed of 102 chronic kidney disease patients, while a comparison study used a simulated dataset. RBF-sPLS identified two brain regions of interest from our patient data: the temporal lobe and the occipital lobe, which are associated with aging and anemia, respectively. Our simulation study suggested that such brain regions are extracted with excellent accuracy using our method. PMID:23762188

  6. Pole coordinates data prediction by combination of least squares extrapolation and double autoregressive prediction

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw

    2016-04-01

    Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data and a precession-nutation extrapolation model. This paper is focused on the prediction of pole coordinates data by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs; thus time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from the time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.

  7. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data over a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
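
    Fixing the last observed data point on the LS curve can be read as an equality-constrained least-squares problem, solvable through the KKT (Lagrange multiplier) system. A minimal sketch under that reading (the paper's parameterization and two-step PM scheme are richer than this):

      import numpy as np

      def constrained_ls(A, y):
          """min ||A x - y||^2 subject to the fit passing exactly through the last
          observation, a_n^T x = y_n, via the KKT (Lagrange multiplier) system."""
          a_n, y_n = A[-1], y[-1]
          m = A.shape[1]
          K = np.zeros((m + 1, m + 1))
          K[:m, :m] = 2 * A.T @ A
          K[:m, m] = a_n
          K[m, :m] = a_n
          rhs = np.concatenate([2 * A.T @ y, [y_n]])
          x = np.linalg.solve(K, rhs)[:m]
          assert np.isclose(A[-1] @ x, y[-1])      # the constraint holds exactly
          return x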

  8. Texture discrimination of green tea categories based on least squares support vector machine (LSSVM) classifier

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; He, Yong; Qiu, Zhengjun; Wu, Di

    2008-03-01

    This research aimed to develop a multi-spectral imaging technique for discriminating green tea categories based on texture analysis. Three key wavelengths of 550, 650 and 800 nm were implemented in a common-aperture multi-spectral charge-coupled device camera, and 190 unique images were acquired for a data set of four different kinds of green tea. An image data set consisting of 15 texture features for each image was generated based on texture analysis techniques including the grey level co-occurrence method (GLCM) and texture filtering. To optimize the texture features, five features that were not correlated with the tea category were eliminated. Unsupervised cluster analysis was conducted using the optimized texture features based on principal component analysis. The cluster analysis showed that the four kinds of green tea could be separated in the space of the first two principal components; however, there was overlap among the different kinds of green tea. To enhance the discrimination performance, a least squares support vector machine (LSSVM) classifier was developed based on the optimized texture features. Excellent discrimination performance was obtained for samples in the prediction set: 100%, 100%, 75% and 100% for the four kinds of green tea, respectively. It can be concluded that texture discrimination of green tea categories based on multi-spectral image technology is feasible.
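
    For context, LS-SVM training replaces the standard SVM quadratic program with a single linear system. A hedged two-class sketch with an RBF kernel (a multi-class setting such as this study's would build on it, e.g. one-vs-rest; all names and defaults here are illustrative):

      import numpy as np

      def rbf(X, Z, gamma=0.5):
          d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def lssvm_train(X, y, C=10.0, gamma=0.5):
          """Solve the LS-SVM system [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
          for labels y in {-1, +1}."""
          n = len(y)
          M = np.zeros((n + 1, n + 1))
          M[0, 1:] = 1.0
          M[1:, 0] = 1.0
          M[1:, 1:] = rbf(X, X, gamma) + np.eye(n) / C
          sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
          return sol[0], sol[1:]                   # bias b, dual weights alpha

      def lssvm_predict(X_train, b, alpha, X_new, gamma=0.5):
          return np.sign(rbf(X_new, X_train, gamma) @ alpha + b)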

  9. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the western coast of the UK. The amplitudes of the tidal constituents at these two tidal stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington can provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum produced by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among those expected in this time span, as well as some new frequencies beyond the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
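
    The essence of LS-HE is that, for each candidate frequency, a mean plus a sine-cosine pair is fitted by least squares and the reduction in the residual sum of squares is recorded as spectral power; nothing requires equispaced sampling, which is why it suits the Bandar Abbas record. A hedged NumPy sketch with synthetic diurnal and semidiurnal constituents:

      import numpy as np

      def ls_harmonic_spectrum(t, y, freqs):
          """LS spectrum for (possibly unevenly sampled) data: for each candidate
          frequency, fit mean + cosine + sine and record the drop in the RSS."""
          y = y - y.mean()
          power = []
          for f in freqs:
              A = np.column_stack([np.ones_like(t),
                                   np.cos(2 * np.pi * f * t),
                                   np.sin(2 * np.pi * f * t)])
              beta, *_ = np.linalg.lstsq(A, y, rcond=None)
              r = y - A @ beta
              power.append(y @ y - r @ r)          # variance explained at f
          return np.array(power)

      # Synthetic record: diurnal + semidiurnal constituents at irregular epochs (days)
      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0, 60, 800))
      y = (1.2 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.cos(2 * np.pi * 1.0 * t)
           + 0.1 * rng.standard_normal(800))
      freqs = np.arange(0.5, 3.0, 1 / 60)          # cycles/day, record-length resolution
      spec = ls_harmonic_spectrum(t, y, freqs)
      print(freqs[np.argsort(spec)[-2:]])          # dominant peaks, near 1.0 and 2.0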

  10. Rapid Quantitative Analysis of Forest Biomass Using Fourier Transform Infrared Spectroscopy and Partial Least Squares Regression

    PubMed Central

    Fasina, Oladiran O.; Eckhardt, Lori G.

    2016-01-01

    Fourier transform infrared reflectance (FTIR) spectroscopy has been used to predict properties of forest logging residue, a very heterogeneous feedstock material. Properties studied included the chemical composition, thermal reactivity, and energy content. The ability to rapidly determine these properties is vital in the optimization of conversion technologies for the successful commercialization of biobased products. Partial least squares regression of first derivative treated FTIR spectra had good correlations with the conventionally measured properties. For the chemical composition, constructed models generally did a better job of predicting the extractives and lignin content than the carbohydrates. In predicting the thermochemical properties, models for volatile matter and fixed carbon performed very well (i.e., R2 > 0.80, RPD > 2.0). The effect of reducing the wavenumber range to the fingerprint region for PLS modeling and the relationship between the chemical composition and higher heating value of logging residue were also explored. This study is new and different in that it is the first to use FTIR spectroscopy to quantitatively analyze forest logging residue, an abundant resource that can be used as a feedstock in the emerging low carbon economy. Furthermore, it provides a complete and systematic characterization of this heterogeneous raw material. PMID:28003929
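
    As an illustration of the modeling pipeline described above (derivative pretreatment followed by PLS regression), here is a hedged scikit-learn sketch on synthetic stand-in spectra; the window length, component count and R2/RPD computations are illustrative choices, not the paper's settings:

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      # Hypothetical stand-in spectra (samples x wavenumbers) and a property y
      rng = np.random.default_rng(0)
      X = rng.standard_normal((60, 400)).cumsum(axis=1)    # smooth-ish fake spectra
      y = 0.02 * X[:, 120] + 0.05 * rng.standard_normal(60)

      # First-derivative pretreatment, then PLS regression with cross-validation
      X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
      y_cv = cross_val_predict(PLSRegression(n_components=8), X_d1, y, cv=10).ravel()

      r2 = 1 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()
      rpd = y.std() / (y - y_cv).std()             # residual predictive deviation
      print(f"R2 = {r2:.2f}, RPD = {rpd:.2f}")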

  11. Differentiation of Pueraria lobata and Pueraria thomsonii using partial least square discriminant analysis (PLS-DA).

    PubMed

    Wong, Ka H; Razmovski-Naumovski, Valentina; Li, Kong M; Li, George Q; Chan, Kelvin

    2013-10-01

    The aims of the study were to differentiate Pueraria lobata from its related species Pueraria thomsonii and to examine the raw herbal material used in manufacturing kudzu root granules using partial least squares discriminant analysis (PLS-DA). Sixty-four raw materials of P. lobata and P. thomsonii and kudzu root-labelled granules were analysed by ultra-performance liquid chromatography. To differentiate P. lobata from P. thomsonii, PLS-DA models were employed using variables selected from the entire chromatograms, by genetic algorithm (GA), by successive projection algorithm (SPA), by puerarin alone and from six selected peaks. The models constructed by GA and SPA demonstrated superior classification ability and lower model complexity compared to the model based on the entire chromatographic matrix, whilst the model constructed from the six selected peaks was comparable to the entire chromatographic model. The model established by puerarin alone showed inferior classification ability. In addition, the PLS-DA models constructed from the entire chromatographic matrix, GA, SPA and the six selected peaks showed that four brands out of seventeen granules were mislabelled as P. lobata. In conclusion, PLS-DA is a promising procedure for differentiating Pueraria species and determining the raw material used in commercial products.
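
    PLS-DA itself is ordinary PLS regression on a dummy-coded class-membership matrix, with a new sample assigned to the class whose column receives the largest predicted response. A hedged sketch of that construction (one common decision rule among several):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def plsda_fit(X, labels, n_components=5):
          """Fit PLS-DA: PLS regression of X on a one-hot class-membership matrix.
          labels: 1-D NumPy array of class identifiers."""
          classes = np.unique(labels)
          Y = (labels[:, None] == classes[None, :]).astype(float)
          return PLSRegression(n_components=n_components).fit(X, Y), classes

      def plsda_predict(model, classes, X_new):
          """Assign each sample to the class with the largest predicted response."""
          return classes[np.argmax(model.predict(X_new), axis=1)]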

  12. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
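
    The structural line has a closed form once the ratio of error variances in Y and X is specified, and OLS is recovered as the error-free-X limit. A hedged sketch contrasting the two fits (the simulation illustrates the attenuation of the OLS slope when X is noisy, echoing the abstract's conclusion):

      import numpy as np

      def ols_and_structural(x, y, lam=1.0):
          """OLS line and the structural (Deming) line; lam is the assumed ratio of
          error variances var(err_Y)/var(err_X). lam = 1 is orthogonal regression."""
          sxx, syy = x.var(), y.var()
          sxy = np.cov(x, y, bias=True)[0, 1]
          b_ols = sxy / sxx
          b_sa = ((syy - lam * sxx) +
                  np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
          return ((b_ols, y.mean() - b_ols * x.mean()),
                  (b_sa, y.mean() - b_sa * x.mean()))

      # With error in X, the OLS slope is attenuated; the structural line recovers it
      rng = np.random.default_rng(2)
      x_true = rng.uniform(0, 10, 200)
      x = x_true + rng.standard_normal(200)               # error in X
      y = 2.0 * x_true + 1.0 + rng.standard_normal(200)   # error in Y
      ols, sa = ols_and_structural(x, y, lam=1.0)
      print("OLS slope:", round(ols[0], 2), " SA slope:", round(sa[0], 2))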

  13. On the least-square estimation of parameters for statistical diffusion weighted imaging model.

    PubMed

    Yuan, Jing; Zhang, Qinwei

    2013-01-01

    A statistical model for diffusion-weighted imaging (DWI) has been proposed for better tissue characterization by introducing a distribution function for apparent diffusion coefficients (ADC) to account for the restrictions and hindrances to water diffusion in biological tissues. This paper studies the precision and uncertainty in the estimation of parameters for the statistical DWI model with a Gaussian distribution, i.e., the position of the distribution maximum (Dm) and the distribution width (σ), by using non-linear least-squares (NLLS) fitting. Numerical simulation shows that precise parameter estimation, particularly for σ, requires an extremely high signal-to-noise ratio (SNR) in the DWI signal when NLLS fitting is used. Unfortunately, such an extremely high SNR may be difficult to achieve with normal settings for clinical DWI scans. For Dm and σ parameter mapping of the in vivo human brain, multiple local minima are found and result in large uncertainties in the estimation of the distribution width σ. The estimation error in NLLS fitting originates primarily from the insensitivity of the DWI signal intensity to the distribution width σ, as given by the functional form of the Gaussian-type statistical DWI model.

  14. Partial least squares correlation of multivariate cognitive abilities and local brain structure in children and adolescents.

    PubMed

    Ziegler, G; Dahnke, R; Winkler, A D; Gaser, C

    2013-11-15

    Intelligent behavior is not a one-dimensional phenomenon. Individual differences in human cognitive abilities might therefore be described by a 'cognitive manifold' of intercorrelated tests from partially independent domains of general intelligence and executive functions. However, the relationship between these individual differences and brain morphology is not yet fully understood. Here we take a multivariate approach to analyzing covariations across individuals in two feature spaces: the low-dimensional space of cognitive ability subtests and the high-dimensional space of local gray matter volume obtained from voxel-based morphometry. By exploiting a partial least squares correlation framework in a large sample of 286 healthy children and adolescents, we identify directions of maximum covariance between both spaces in terms of latent variable modeling. We obtain an orthogonal set of latent variables representing commonalities in the brain-behavior system, which emphasize specific neuronal networks involved in cognitive ability differences. We further explore the early-lifespan maturation of the covariance between cognitive abilities and local gray matter volume. The dominant latent variable revealed positive weights across widespread gray matter regions (in the brain domain) and the strongest weights for parents' ratings of children's executive function (in the cognitive domain). The obtained latent variables for brain and cognitive abilities exhibited moderate correlations of 0.46-0.6. Moreover, the multivariate modeling revealed indications of a heterochronic formation of the association as a process of brain maturation across different age groups.

  15. Penalized weighted least-squares image reconstruction for dual energy X-ray transmission tomography.

    PubMed

    Sukovic, P; Clinthorne, N H

    2000-11-01

    We present a dual-energy (DE) transmission computed tomography (CT) reconstruction method. It is statistically motivated and features nonnegativity constraints in the density domain. A penalized weighted least squares (PWLS) objective function has been chosen to handle the non-Poisson noise added by amorphous silicon (aSi:H) detectors. A Gauss-Seidel algorithm has been used to minimize the objective function. The behavior of the method in terms of bias/standard deviation tradeoff has been compared to that of a DE method that is based on filtered back projection (FBP). The advantages of the DE PWLS method are largest for high noise and/or low flux cases. Qualitative results suggest this as well. Also, the reconstructed images of an object with opaque regions are presented. Possible applications of the method are: attenuation correction for positron emission tomography (PET) images, various quantitative computed tomography (QCT) methods such as bone mineral densitometry (BMD), and the removal of metal streak artifacts.

  16. A modified Generalized Least Squares method for large scale nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Schnabel, Georg; Leeb, Helmut

    2017-01-01

    Nuclear data evaluation aims to provide estimates and uncertainties, in the form of covariance matrices, of cross sections and related quantities. Many practitioners use the Generalized Least Squares (GLS) formulas to combine experimental data and results of model calculations in order to determine reliable estimates and covariance matrices. A prerequisite for applying the GLS formulas is the construction of a prior covariance matrix for the observables from a set of model calculations. Modern nuclear model codes are able to provide predictions for a large number of observables. However, the inclusion of all observables may lead to a prior covariance matrix of intractable size. Therefore, we introduce mathematically equivalent versions of the GLS formulas that avoid the construction of the prior covariance matrix. Experimental data can be incrementally incorporated into the evaluation process, hence there is no upper limit on their amount. We demonstrate the modified GLS method in a tentative evaluation involving about three million observables using the code TALYS. The revised scheme is well suited as a building block of a database application providing evaluated nuclear data. Updating with new experimental data is feasible, and users can query estimates and correlations of arbitrary subsets of the observables stored in the database.

  17. [Band depth analysis and partial least square regression based winter wheat biomass estimation using hyperspectral measurements].

    PubMed

    Fu, Yuan-Yuan; Wang, Ji-Hua; Yang, Gui-Jun; Song, Xiao-Yu; Xu, Xin-Gang; Feng, Hai-Kuan

    2013-05-01

    A major limitation of using existing vegetation indices for crop biomass estimation is that they approach a saturation level asymptotically over a certain range of biomass. In order to resolve this problem, band depth analysis and partial least squares regression (PLSR) were combined to establish a winter wheat biomass estimation model in the present study. The models based on the combination of band depth analysis and PLSR were subsequently compared with models based on common vegetation indices from the point of view of estimation accuracy. Band depth analysis was conducted in the visible spectral domain (550-750 nm). Band depth, band depth ratio (BDR), normalized band depth index, and band depth normalized to area were utilized to represent band depth information. Among the calibrated estimation models, those based on the combination of band depth analysis and PLSR reached higher accuracy than those based on the vegetation indices. Among them, the combination of BDR and PLSR achieved the highest accuracy (R2 = 0.792, RMSE = 0.164 kg x m(-2)). The results indicated that the combination of band depth analysis and PLSR can well overcome the saturation problem and improve biomass estimation accuracy when winter wheat biomass is large.

  18. Entropy and generalized least square methods in assessment of the regional value of streamgages

    USGS Publications Warehouse

    Markus, M.; Vernon, Knapp H.; Tasker, Gary D.

    2003-01-01

    The Illinois State Water Survey performed a study to assess the streamgaging network in the State of Illinois. One of the important aspects of the study was to assess the regional value of each station through an assessment of the information transfer among gaging records for low, average, and high flow conditions. This analysis was performed for the main hydrologic regions in the State, and the stations were initially evaluated using a new approach based on entropy analysis. To determine the regional value of each station within a region, several information parameters, including total net information, were defined based on entropy. Stations were ranked based on the total net information. For comparison, the regional value of the same stations was assessed using the generalized least squares (GLS) regression method developed by the US Geological Survey. Finally, a hybrid combination of GLS and entropy was created by including a function of the negative net information as a penalty function in the GLS. The weights of the combined model were determined to maximize the average correlation with the results of GLS and entropy. The entropy and GLS methods were evaluated using the high-flow data from southern Illinois stations. The combined method was compared with the entropy and GLS approaches using the high-flow data from eastern Illinois stations. © 2003 Elsevier B.V. All rights reserved.

  19. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight healthy young volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping their neck, hip and knee joints fixed, and then to return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
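
    The estimation core here is recursive least squares, which updates the parameter estimate sample by sample and can therefore track gains that change over the trial; the fixed trace method mentioned above additionally rescales the covariance so its trace stays constant, which this hedged sketch of standard exponentially weighted RLS omits:

      import numpy as np

      def rls(Phi, y, lam=0.98, delta=1e3):
          """Exponentially weighted recursive least squares.
          Phi: (n, p) regressor rows; y: (n,) outputs; lam: forgetting factor."""
          p = Phi.shape[1]
          theta = np.zeros(p)
          P = delta * np.eye(p)                    # large initial covariance = weak prior
          trajectory = []
          for phi, yk in zip(Phi, y):
              k = P @ phi / (lam + phi @ P @ phi)  # gain vector
              theta = theta + k * (yk - phi @ theta)
              P = (P - np.outer(k, phi @ P)) / lam
              trajectory.append(theta.copy())
          return np.array(trajectory)              # parameter estimates over time

      # For the PD model above one would take phi_k = [angle_k, angular_rate_k] and
      # y_k = ankle torque, so theta tracks the gains (KP, KD) through the trial.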

  1. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piecewise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.

  2. Amplitude differences least squares method applied to temporal cardiac beat alignment

    NASA Astrophysics Data System (ADS)

    Correa, R. O.; Laciar, E.; Valentinuzzi, M. E.

    2007-11-01

    High-resolution averaged ECG is an important diagnostic technique in post-infarction and/or chagasic patients with a high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results show that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power-line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
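
    The LSAD criterion can be sketched in a few lines: slide the template over each beat and pick the lag minimizing the sum of squared amplitude differences, versus CC's lag maximizing the inner product. A hedged sketch (the padding convention and names are ours):

      import numpy as np

      def align(beat, template, max_lag=50):
          """Best alignment lag of `beat` against `template` under two criteria:
          least-squares amplitude differences (LSAD, minimized) and
          cross-correlation (CC, maximized). `beat` must be padded so that
          len(beat) == len(template) + 2 * max_lag."""
          n = len(template)
          lags = np.arange(-max_lag, max_lag + 1)
          ssd = np.array([((beat[max_lag + L: max_lag + L + n] - template) ** 2).sum()
                          for L in lags])
          cc = np.array([(beat[max_lag + L: max_lag + L + n] * template).sum()
                         for L in lags])
          return lags[np.argmin(ssd)], lags[np.argmax(cc)]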

  3. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    USGS Publications Warehouse

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.

  4. Least squares evaluations for form and profile errors of ellipse using coordinate data

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-09-01

    To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of the surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks, which makes it possible to accurately evaluate a tolerance range with the form error and profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited to separating the two errors in a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, offers better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
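
    A common starting point for such evaluations is an algebraic least-squares fit of the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0, taking the right singular vector of the design matrix with the smallest singular value as the coefficient vector. A hedged sketch (note this plain algebraic fit, unlike constrained direct fits, does not force the conic to be an ellipse, and the paper's axis/foci-based error evaluation would be built on top of the fitted geometry):

      import numpy as np

      def fit_conic(x, y):
          """Algebraic LS fit of a x^2 + b xy + c y^2 + d x + e y + f = 0 with
          ||(a,...,f)|| = 1, via the smallest right singular vector."""
          D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
          return np.linalg.svd(D)[2][-1]           # coefficients (a, b, c, d, e, f)

      # Noisy samples of a 5 x 3 ellipse; algebraic residuals should be tiny
      rng = np.random.default_rng(3)
      t = rng.uniform(0, 2 * np.pi, 300)
      x = 5 * np.cos(t) + 0.01 * rng.standard_normal(300)
      y = 3 * np.sin(t) + 0.01 * rng.standard_normal(300)
      coef = fit_conic(x, y)
      D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
      print(np.abs(D @ coef).max())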

  5. Radioisotopic neutron transmission spectrometry: Quantitative analysis by using partial least-squares method.

    PubMed

    Kim, Jong-Yun; Choi, Yong Suk; Park, Yong Joon; Jung, Sung-Hee

    2009-01-01

    Neutron spectrometry, based on the scattering of high-energy fast neutrons from a radioisotope and their slowing down by light hydrogen atoms, is a useful technique for the non-destructive, quantitative measurement of hydrogen content because it has a large measuring volume and is not affected by temperature, pressure, pH value or color. The most common choices for a radioisotope neutron source are (252)Cf and (241)Am-Be. In this study, (252)Cf with a neutron flux of 6.3x10^6 n/s has been used as an attractive neutron source because of its high neutron flux and weak radioactivity. Pulse-height neutron spectra have been obtained using an in-house-built radioisotopic neutron spectrometric system equipped with a (3)He detector and a multi-channel analyzer, including a neutron shield. As a preliminary study, a polyethylene block (density of approximately 0.947 g/cc and area of 40 cm x 25 cm) was used for the determination of hydrogen content by using multivariate calibration models, depending on the thickness of the block. Compared with the results obtained from a simple linear calibration model, the partial least-squares regression (PLSR) method offered better performance in the quantitative data analysis. This also revealed that the PLSR method in a neutron spectrometric system can be promising for real-time, online monitoring of powder processes to determine the content of any type of molecule containing hydrogen nuclei.

  6. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the globally optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the globally optimal estimate, which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976

  7. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.

  8. Combined Helmholtz equation-least squares method for reconstructing acoustic radiation from arbitrarily shaped objects

    NASA Astrophysics Data System (ADS)

    Wu, Sean F.; Zhao, Xiang

    2002-07-01

    A combined Helmholtz equation-least squares (CHELS) method is developed for reconstructing acoustic radiation from an arbitrary object. This method combines the advantages of both the HELS method and the Helmholtz integral theory-based near-field acoustic holography (NAH). As such, it allows for reconstruction of the acoustic field radiated from an arbitrary object with relatively few measurements, thus significantly enhancing the reconstruction efficiency. The first step in the CHELS method is to establish the HELS formulations based on a finite number of acoustic pressure measurements taken on or beyond a hypothetical spherical surface that encloses the object under consideration. Next, enough field acoustic pressures are generated using the HELS formulations and taken as the input to the Helmholtz integral formulations implemented through the boundary element method (BEM). The acoustic pressure and normal component of the velocity at the discretized nodes on the surface are then determined by solving two matrix equations using singular value decomposition (SVD) and regularization techniques. Also presented are in-depth analyses of the advantages and limitations of the CHELS method. Examples of reconstructing acoustic radiation from separable and nonseparable surfaces are demonstrated. © 2002 Acoustical Society of America.

  9. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    NASA Astrophysics Data System (ADS)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 bits/pixel, 5.57 bits/pixel, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to those of clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.

  10. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    USGS Publications Warehouse

    Heidari, M.; Moench, A.

    1997-01-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.

  11. The comparison of robust partial least squares regression with robust principal component regression on a real data set

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  12. Recursive least squares approach to calculate motion parameters for a moving camera

    NASA Astrophysics Data System (ADS)

    Chang, Samuel H.; Fuller, Joseph; Farsaie, Ali; Elkins, Les

    2003-10-01

    The increase in quality and the decrease in price of digital camera equipment have led to growing interest in reconstructing 3-dimensional objects from sequences of 2-dimensional images. The accuracy of the models obtained depends on two sets of parameter estimates. The first is the set of lens parameters - focal length, principal point, and distortion parameters. The second is the set of motion parameters that allows the comparison of a moving camera's desired location to a theoretical location. In this paper, we address the latter problem, i.e. the estimation of the set of 3-D motion parameters from data obtained with a moving camera. We propose a method that uses Recursive Least Squares for camera motion parameter estimation with observation noise. We accomplish this by calculation of hidden information through camera projection and minimization of the estimation error. We then show how a filter based on the motion parameter estimates may be designed to correct for the errors in the camera motion. The validity of the approach is illustrated by the presentation of experimental results obtained using the methods described in the paper.

  13. Improvement of structural models using covariance analysis and nonlinear generalized least squares

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.; Kuo, C. P.; Wada, B. K.

    1992-01-01

    The next generation of large, flexible space structures will be too light to support their own weight, requiring a system of structural supports for ground testing. The authors have proposed multiple boundary-condition testing (MBCT), using more than one support condition to reduce uncertainties associated with the supports. MBCT would revise the mass and stiffness matrix, analytically qualifying the structure for operation in space. The same procedure is applicable to other common test conditions, such as empty/loaded tanks and subsystem/system level tests. This paper examines three techniques for constructing the covariance matrix required by nonlinear generalized least squares (NGLS) to update structural models based on modal test data. The methods range from a complicated approach used to generate the simulation data (i.e., the correct answer) to a diagonal matrix based on only two constants. The results show that NGLS is very insensitive to assumptions about the covariance matrix, suggesting that a workable NGLS procedure is possible. The examples also indicate that the multiple boundary condition procedure more accurately reduces errors than individual boundary condition tests alone.

  14. Evaluation of milk compositional variables on coagulation properties using partial least squares.

    PubMed

    Bland, Julie H; Grandison, Alistair S; Fagan, Colette C

    2015-02-01

    The aim of this study was to investigate the effects of numerous milk compositional factors on milk coagulation properties using Partial Least Squares (PLS). Milk from herds of Jersey and Holstein-Friesian cattle was collected across the year and blended (n=55) to maximise variation in composition and coagulation. The milk was analysed for casein, protein, fat, titratable acidity, lactose, Ca2+, urea content, casein micelle size (CMS), fat globule size, somatic cell count and pH. Milk coagulation properties were defined as coagulation time, curd firmness and curd firmness rate, measured by a controlled-strain rheometer. The models derived from PLS had higher predictive power than previous models, demonstrating the value of measuring more milk components. In addition to the well-established relationships with casein and protein levels, CMS and fat globule size were found to have a similarly strong impact in all three models. The study also found a positive impact of fat on milk coagulation properties, a strong relationship between lactose and curd firmness, and one between urea and curd firmness rate, all of which warrant further investigation given the current lack of knowledge of the underlying mechanisms. These findings demonstrate the importance of using a wider range of milk compositional variables for the prediction of milk coagulation properties, and hence as indicators of milk suitability for cheese making.

  15. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.

  16. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    PubMed

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use.
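
    As a rough illustration of the fitting step, the sketch below fits a Marshall-Hoare-style double exponential to a series of temperature readings with SciPy, treating the elapsed time since death at the first reading as an unknown. The paper's full method also estimates the temperature at death; this simplified sketch fixes it for brevity, and all numbers are hypothetical:

      import numpy as np
      from scipy.optimize import least_squares

      T_amb, T0 = 20.0, 37.2          # ambient and at-death temperatures (assumed here)

      def cooling(tau, k1, k2):
          """Double exponential with an initial plateau: equals T0 at tau = 0 with
          zero slope there, decaying to T_amb. Assumes k1 != k2."""
          q = (k2 * np.exp(-k1 * tau) - k1 * np.exp(-k2 * tau)) / (k2 - k1)
          return T_amb + (T0 - T_amb) * q

      # Hypothetical readings 15 min apart (hours), e.g. from a thermocouple logger
      t_read = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
      T_read = np.array([29.8, 29.4, 29.1, 28.7, 28.4])

      def residuals(p):
          k1, k2, t0 = p              # t0 = time since death at the first reading
          return cooling(t_read + t0, k1, k2) - T_read

      fit = least_squares(residuals, x0=[0.8, 0.2, 5.0],
                          bounds=([1e-3, 1e-3, 0.0], [5.0, 5.0, 48.0]))
      print("estimated hours since death at first reading:", fit.x[2])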

  17. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    PubMed

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step toward improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction and classification, providing lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference in lumen area of 0.11 mm^2 compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features.

  18. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.

  19. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady, incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.

  20. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.